playlist | file_name | content
---|---|---|
History_vs
|
History_vs_Augustus_Peta_Greenfield_Alex_Gendler.txt
|
His reign marked the beginning of one of history’s greatest empires and the end of one of its first republics. Was Rome’s first emperor a visionary leader who guaranteed his civilization’s place in history or a tyrant who destroyed its core values? Find out in History versus Augustus. Order, order. The defendant today is Gaius Octavius? Gaius Julius Caesar/Augustus... Do we have the wrong guy? No, your Honor. Gaius Octavius, born in 63 BCE, was the grand-nephew of Julius Caesar. He became Gaius Julius Caesar upon being named his great-uncle’s adoptive son and heir. And he gained the title Augustus in 27 BCE when the Senate granted him additional honors. You mean when he established sole authority and became emperor of Rome. Is that bad? Didn’t every place have some king or emperor back then? Actually, your Honor, the Roman people had overthrown their kings centuries before to establish a republic, a government meant to serve the people, not the privilege of a ruling family. And it was Octavius who destroyed this tradition. Octavius was a model public servant. At 16, he was elected to the College of Pontiffs that supervised religious worship. He fought for Rome in Hispania alongside his great-uncle Caesar and took up the responsibility of avenging Caesar’s death when the corrupt oligarchs in the Senate betrayed and murdered him. Caesar had been a power-hungry tyrant who tried to make himself a king while consorting with his Egyptian queen Cleopatra. After his death, Octavius joined his general Mark Antony in starting a civil war that tore Rome apart, then stabbed his ally in the back to increase his own power. Antony was a fool. He waged a disastrous campaign in Parthia and plotted to turn Roman territories into personal kingdoms for himself and Cleopatra. Isn’t that what Caesar had been accused of? Well... So Octavius destroyed Antony for trying to become a king and then became one himself? That’s right. 
You can see the megalomania even in his adopted title – "The Illustrious One." That was a religious honorific. And Augustus didn’t seek power for his own sake. As winner of the civil war and commander of the most troops, it was his duty to restore law and order to Rome so that other factions didn’t continue fighting. He didn’t restore the law - he made it subordinate to him! Not true. Augustus worked to restore the Senate’s prestige, improved food security for the lower classes, and relinquished control of the army when he resigned his consul post. Mere optics. He used his military influence and personal wealth to stack the Senate in his favor, while retaining the powers of a tribune and the right to celebrate military triumphs. He kept control of provinces with the most legions. And if that wasn’t enough, he assumed the consul position twice more to promote his grandchildren. He was clearly trying to establish a dynasty. But what did he do with all that power? Glad you asked, your Honor. Augustus’s accomplishments were almost too many to name. He established consistent taxation for all provinces, ending private exploitation by local tax officials. He personally financed a network of roads and employed couriers so news and troops could travel easily throughout the realm. And it was under Augustus that many of Rome’s famous public buildings were constructed. The writers of the time were nearly unanimous in praising his rule. Did the writers have any other choice? Augustus exiled plenty of people on vague charges, including Ovid, one of Rome’s greatest poets. And you forgot to mention the intrusive laws regarding citizens’ personal lives – punishing adultery, restricting marriage between social classes, even penalties for remaining unmarried. He was trying to improve the citizenry and instill discipline. And he succeeded. 
His legacy speaks for itself: 40 years of internal stability, a professional army that expanded Rome’s frontiers in all directions, and a government still remembered as a model of civic virtue. His legacy was an empire that would go on to wage endless conquest until it collapsed, and a tradition of military autocracy. Any time a dictator in a general’s uniform commits atrocities while claiming to act on behalf of "the people," we have Augustus Caesar to thank. So you’re saying Augustus was a good emperor, and you’re saying there’s no such thing? We’re used to celebrating historical leaders for their achievements and victories. But to ask whether an individual should have such power in the first place is to put history itself on trial.
|
History_vs
|
History_vs_Che_Guevara_Alex_Gendler.txt
|
His face is recognized all over the world. The young medical student who became a revolutionary icon. But was Che Guevara a heroic champion of the poor or a ruthless warlord who left a legacy of repression? Order, order. Hey, where have I seen that guy before? Ahem, your Honor, this is Ernesto Che Guevara. In the early 1950s, he left behind a privileged life as a medical student in Argentina to travel through rural Latin America. The poverty and misery he witnessed convinced him that saving lives required more than medicine. So he became a terrorist seeking to violently overthrow the region's governments. What? The region's governments were brutal oligarchies. Colonialism may have formally ended, but elites still controlled all the wealth. American corporations bought up land originally seized from indigenous people and used it for profit and export, even keeping most of it uncultivated while locals starved. Couldn't they vote to change that? Oh, they tried, your Honor. In 1953, Che came to Guatemala under the democratically-elected government of President Árbenz. Árbenz passed reforms to redistribute some of this uncultivated land back to the people while compensating the landowners. But he was overthrown in a CIA-sponsored coup. The military was protecting against the seizure of private property and communist takeover. They were protecting corporate profits and Che saw that they would use the fear of communism to overthrow any government that threatened those profits. So he took the lessons of Guatemala with him to Mexico. There, he met exiled Cuban revolutionaries and decided to help them liberate their country. You mean help Fidel Castro turn a vibrant Cuba into a dictatorship. Dictatorship was what Cuba had before the revolution. Fulgencio Batista was a tyrant who came to power in a military coup. He turned Havana into a luxury playground for foreigners while keeping Cubans mired in poverty and killing thousands in police crackdowns. 
Even President Kennedy called it the worst example of "economic colonization, humiliation, and exploitation in the world." Whatever Batista's faults, it can't compare to the totalitarian nightmare Castro would create. Forced labor camps, torture of prisoners, no freedom to speak or to leave. But this isn't the trial of Fidel Castro, is it? Che Guevara was instrumental in helping Castro seize power. As a commander in his guerrilla army, he unleashed a reign of terror across the countryside, killing any suspected spies or dissenters. He also helped peasants build health clinics and schools, taught them to read, and even recited poetry to them. His harsh discipline was necessary against a much stronger enemy who didn't hesitate to burn entire villages suspected of aiding the rebels. Let's not forget that the new regime held mass executions and killed hundreds of people without trial as soon as they took power in 1959. The executed were officials and collaborators who had tormented the masses under Batista. The people supported this revolutionary justice. Which people? An angry mob crying for blood does not a democracy make. And that's not even mentioning the forced labor camps, arbitrary arrests, and repression of LGBT people that continued long after the revolution. There's a reason people kept risking their lives to flee, often with nothing but the clothes on their backs. So was that all this Che brought to Cuba? Just another violent dictatorship? Not at all. He oversaw land redistribution, helped establish universal education, and organized volunteer literacy brigades that raised Cuba's literacy rate to 96%, still one of the highest in the world. Which allowed the government to control what information everyone received. Guevara's idealistic incompetence as Finance Minister caused massive drops in productivity when he replaced worker pay raises with moral certificates. He suppressed all press freedom, declaring that newspapers were instruments of the oligarchy. 
And it was he who urged Castro to host Soviet nuclear weapons, leading to the Cuban Missile Crisis that brought the world to the brink of destruction. He was a leader, not a bureaucrat. That's why he eventually left to spread the revolution abroad. Which didn't go well. He failed to rally rebels in the Congo and went to Bolivia even when the Soviets disapproved. The Bolivian Government, with the help of the CIA, was able to capture and neutralize this terrorist in 1967, before he could do much damage. While doing plenty of damage themselves in the process. So that was the end of it? Not at all. As Che said, the revolution is immortal. He was publicly mourned in cities all over the world. Not by the Cubans who managed to escape. And his story would inspire young activists for generations to come. Ha. A trendy symbol of rebellion for those who never had to live under his regime. Symbols of revolution may become commodified, but the idea of a more just world remains. Maybe, but I'm not sharing my coffee. Che Guevara was captured and executed by government forces in Bolivia. His remains would not be found for another 30 years. But did he die a hero or had he already become a villain? And should revolutions be judged by their ideals or their outcomes? These are the questions we face when we put history on trial.
|
History_vs
|
역사_대_블리디미르_레닌알렉스_젠들러_Alex_Gendler.txt
|
He was one of the most influential figures of the 20th century, forever changing the course of one of the world's largest countries. But was he a hero who toppled an oppressive tyranny or a villain who replaced it with another? It's time to put Lenin on the stand in History vs. Lenin. "Order, order, hmm. Now, wasn't it your fault that the band broke up?" "Your honor, this is Vladimir Ilyich Ulyanov, AKA Lenin, the rabble-rouser who helped overthrow the Russian tsar Nicholas II in 1917 and founded the Soviet Union, one of the worst dictatorships of the 20th century." "Ohh." "The tsar was a bloody tyrant under whom the masses toiled in slavery." "This is rubbish. Serfdom had already been abolished in 1861." "And replaced by something worse. The factory bosses treated the people far worse than their former feudal landlords. And unlike the landlords, they were always there. Russian workers toiled for eleven hours a day and were the lowest paid in all of Europe." "But Tsar Nicholas made laws to protect the workers." "He reluctantly did the bare minimum to avert revolution, and even there, he failed. Remember what happened in 1905 after his troops fired on peaceful petitioners?" "Yes, and the tsar ended the rebellion by introducing a constitution and an elected parliament, the Duma." "While retaining absolute power and dissolving them whenever he wanted." "Perhaps there would've been more reforms in due time if radicals, like Lenin, weren't always stirring up trouble." "Your Honor, Lenin had seen his older brother Aleksandr executed by the previous tsar for revolutionary activity, and even after the reforms, Nicholas continued the same mass repression and executions, as well as the unpopular involvement in World War I, that cost Russia so many lives and resources." "Hm, this tsar doesn't sound like such a capital fellow." "Your Honor, maybe Nicholas II did doom himself with bad decisions, but Lenin deserves no credit for this. 
When the February 1917 uprisings finally forced the tsar to abdicate, Lenin was still exiled in Switzerland." "Hm, so who came to power?" "The Duma formed a provisional government, led by Alexander Kerensky, an incompetent bourgeois failure. He even launched another failed offensive in the war, where Russia had already lost so much, instead of ending it like the people wanted." "It was a constitutional social democratic government, the most progressive of its time. And it could have succeeded eventually if Lenin hadn't returned in April, sent by the Germans to undermine the Russian war effort and instigate riots." "Such slander! The July Days were a spontaneous and justified reaction against the government's failures. And Kerensky showed his true colors when he blamed Lenin and arrested and outlawed his Bolshevik party, forcing him to flee into exile again. Some democracy! It's a good thing the government collapsed under their own incompetence and greed when they tried to stage a military coup, then had to ask the Bolsheviks for help when it backfired. After that, all Lenin had to do was return in October and take charge. The government was peacefully overthrown overnight." "But what the Bolsheviks did after gaining power wasn't very peaceful. How many people did they execute without trial? And was it really necessary to murder the tsar's entire family, even the children?" "Russia was being attacked by foreign imperialists, trying to restore the tsar. Any royal heir that was rescued would be recognized as ruler by foreign governments. It would've been the end of everything the people had fought so hard to achieve. Besides, Lenin may not have given the order." "But it was not only imperialists that the Bolsheviks killed. What about the purges and executions of other socialist and anarchist parties, their old allies? What about the Tambov Rebellion, where peasants, resisting grain confiscation, were killed with poison gas? 
Or sending the army to crush the workers in Kronstadt, who were demanding democratic self-management? Was this still fighting for the people?" "Yes! The measures were difficult, but it was a difficult time. The new government needed to secure itself while being attacked from all sides, so that the socialist order could be established." "And what good came of this socialist order? Even after the civil war was won, there were famines, repression and millions executed or sent to die in camps, while Lenin's successor Stalin established a cult of personality and absolute power." "That wasn't the plan. Lenin never cared for personal gains, even his enemies admitted that he fully believed in his cause, living modestly and working tirelessly from his student days until his untimely death. He saw how power-hungry Stalin was and tried to warn the party, but it was too late." "And the decades of totalitarianism that followed?" "You could call it that, but it was Lenin's efforts that changed Russia in a few decades from a backward and undeveloped monarchy full of illiterate peasants to a modern, industrial superpower, with one of the world's best educated populations, unprecedented opportunities for women, and some of the most important scientific advancements of the century. Life may not have been luxurious, but nearly everyone had a roof over their head and food on their plate, which few countries have achieved." "But these advances could still have happened, even without Lenin and the repressive regime he established." "Yes, and I could've been a famous rock and roll singer. But how would I have sounded?" We can never be sure how things could've unfolded if different people were in power or different decisions were made, but to avoid the mistakes of the past, we must always be willing to put historical figures on trial.
|
History_vs
|
History_vs_Napoleon_Bonaparte_Alex_Gendler.txt
|
After the French Revolution erupted in 1789, Europe was thrown into chaos. Neighboring countries' monarchs feared they would share the fate of Louis XVI, and attacked the New Republic, while at home, extremism and mistrust between factions led to bloodshed. In the midst of all this conflict, a powerful figure emerged to take charge of France. But did he save the revolution or destroy it? "Order, order, who's the defendant today? I don't see anyone." "Your Honor, this is Napoléon Bonaparte, the tyrant who invaded nearly all of Europe to compensate for his personal stature-based insecurities." "Actually, Napoléon was at least average height for his time. The idea that he was short comes only from British wartime propaganda. And he was no tyrant. He was safeguarding the young Republic from being crushed by the European monarchies." "By overthrowing its government and seizing power himself?" "Your Honor, as a young and successful military officer, Napoléon fully supported the French Revolution, and its ideals of liberty, equality, and fraternity. But the revolutionaries were incapable of real leadership. Robespierre and the Jacobins who first came to power unleashed a reign of terror on the population, with their anti-Catholic extremism and nonstop executions of everyone who disagreed with them. And The Directory that replaced them was an unstable and incompetent oligarchy. They needed a strong leader who could govern wisely and justly." "So, France went through that whole revolution just to end up with another all-powerful ruler?" "Not quite. Napoléon's new powers were derived from the constitution that was approved by a popular vote in the Consulate." "Ha! The constitution was practically dictated at gunpoint in a military coup, and the public only accepted the tyrant because they were tired of constant civil war." 
"Be that as it may, Napoléon introduced a new constitution and a legal code that kept some of the most important achievements of the revolution intact: freedom of religion, abolition of hereditary privilege, and equality before the law for all men." "All men, indeed. He deprived women of the rights that the revolution had given them and even reinstated slavery in the French colonies. Haiti is still recovering from the consequences centuries later. What kind of equality is that?" "The only kind that could be stably maintained at the time, and still far ahead of France's neighbors." "Speaking of neighbors, what was with all the invasions?" "Great question, Your Honor." "Which invasions are we talking about? It was the neighboring empires who had invaded France trying to restore the monarchy, and prevent the spread of liberty across Europe, twice by the time Napoléon took charge. Having defended France as a soldier and a general in those wars, he knew that the best defense is a good offense." "An offense against the entire continent? Peace was secured by 1802, and other European powers recognized the new French Regime. But Bonaparte couldn't rest unless he had control of the whole continent, and all he knew was fighting. He tried to enforce a European-wide blockade of Britain, invaded any country that didn't comply, and launched more wars to hold onto his gains. And what was the result? Millions dead all over the continent, and the whole international order shattered." "You forgot the other result: the spread of democratic and liberal ideals across Europe. It was thanks to Napoléon that the continent was reshaped from a chaotic patchwork of fragmented feudal and religious territories into efficient, modern, and secular nation states where the people held more power and rights than ever before." "Should we also thank him for the rise of nationalism and the massive increase in army sizes? You can see how well that turned out a century later." 
"So what would European history have been like if it weren't for Napoléon?" "Unimaginably better/worse." Napoléon's seemingly unstoppable momentum would die in the Russian winter snows, along with most of his army. But even after being deposed and exiled, he refused to give up, escaping from his prison and launching a bold attempt at restoring his empire before being defeated for the second and final time. Bonaparte was a ruler full of contradictions, defending a popular revolution by imposing absolute dictatorship, and spreading liberal ideals through imperial wars, and though he never achieved his dream of conquering Europe, he undoubtedly left his mark on it, for better or for worse.
|
History_vs
|
역사_와_앤드류_잭슨_제임스_페스터.txt
|
A national hero? Or public enemy number one? Historical figures are often controversial, but few were as deified or vilified in their lifetime as the seventh President of the United States. This is History vs. Andrew Jackson. "Order, order, hm, uh, what were we...ah yes, Mr. Jackson! You stand accused of degrading the office of the presidency, causing financial collapse and wanton cruelty against American Indians. How do you plead?" "Now, Your Honor, I am not a big city lawyer, but I do know a few things. And I know that President Jackson was a self-made frontiersman, a great general, a real man of the people." "Your Honor, this 'man of the people' was a gambler, a drunk, and a brawler. Why, I've heard it said that he would fight at the drop of a hat and then drop the hat himself. I ask you, was such a man fit for the most distinguished office in the nation? Can we forget the debacle of his inauguration? Who ever heard of inviting a drunken mob into the White House? It took ages to get the upholstery clean." "That drunken mob, sir, was the American people, and they deserve to celebrate their victory." "Order, order! Now, did this celebration have pie?" "Very well. Mr. Jackson, is it not the case that immediately upon assuming office you introduced the spoils system, replacing hundreds of perfectly good federal employees with incompetent party loyalists?" "Your Honor, the President did no such thing. He tried to institute rotation in office to avoid any profiteering or funny business. It was the rest of the party who insisted on giving posts to their lackeys." "But Mr. Jackson complied, did he not?" "Now, uh, see here." "Moving on. Mr. Jackson, did you not help to cause the financial Panic of 1837, and the ensuing economic depression with your obsessive war against the Bank of the United States? Was not vetoing its reauthorization, as you did in 1832, an act of irresponsible populist pandering that made no economic sense?" 
"Your Honor, the gentleman has quite the imagination. That bank was just a way for rich Yanks to get richer. And all that money panic was caused when British banks raised interest rates and cut lending. To blame it on the President is preposterous, I say." "But if Mr. Jackson had not destroyed the National Bank, it would have been able to lend to farmers and businesses when other credit dried up, would it not?" "Hm, this is all highly speculative. Can we move on?" "Certainly, Your Honor. We now come to Mr. Jackson's most terrible offense: forcing entire tribes out of their native lands via the Indian Removal Act." "I resent that accusation, sir. The U.S. of A. bought that land from the Indians fair and square." "Do you call coercion and threats by a nation with a far more powerful army fair and square? Or signing a treaty for removing the Cherokee with a small group that didn't include their actual leaders? They didn't have time to properly supply themselves before the army came and forced them to march the Trail of Tears." "Now, hold on a minute. This was all Van Buren's doing after President Jackson left office." "But Mr. Jackson laid the groundwork and made sure the treaty was ratified. All President Van Buren had to do afterwards was enforce it." "Look here, Your Honor. Our government's been purchasing Indian land since the beginning, and my client was negotiating these deals even before he was President. President Jackson truly believed it was best for the Indians to get compensated for their land and move out West, where there was plenty of space for them to keep living the way they were accustomed, rather than stick around and keep butting heads with the white settlers. Some of whom, I remind our court, wanted to exterminate them outright. It was a different time." "And yet, even in this different time, there were many in Congress and even the Supreme Court who saw how wrong the Removal Act was and loudly opposed it, were there not?" 
"My client was under a great deal of pressure. I say, do you think it's easy governing such a huge country and keeping the Union together, when states are fixing to nullify federal laws? President Jackson barely got South Carolina to back down over those import tariffs, and then Georgia had to go discover gold and start grabbing up Cherokee land. It was either get the Indians to move or get in another fight with a state government." "So, you admit that Mr. Jackson sacrificed moral principles to achieve some political goals?" "I do declare, show me one leader who hasn't." As societies change and morals evolve, yesterday's hero may become tomorrow's villain, or vice versa. History may be past, but our understanding of it is always on trial.
|
History_vs
|
역사_대_크리스토퍼_콜롬버스_알렉스_젠들러.txt
|
Many people in the United States and Latin America have grown up celebrating the anniversary of Christopher Columbus's voyage, but was he an intrepid explorer who brought two worlds together or a ruthless exploiter who brought colonialism and slavery? And did he even discover America at all? It's time to put Columbus on the stand in History vs. Christopher Columbus. "Order, order in the court. Wait, am I even supposed to be at work today?" Cough "Yes, your Honor. From 1792, Columbus Day was celebrated in many parts of the United States on October 12th, the actual anniversary date. But although it was declared an official holiday in 1934, individual states aren't required to observe it. Only 23 states close public services, and more states are moving away from it completely." Cough "What a pity. In the 70s, we even moved it to the second Monday in October so people could get a nice three-day weekend, but I guess you folks just hate celebrations." "Uh, what are we celebrating again?" "Come on, Your Honor, we all learned it in school. Christopher Columbus convinced the King of Spain to send him on a mission to find a better trade route to India, not by going East over land but sailing West around the globe. Everyone said it was crazy because they still thought the world was flat, but he knew better. And when in 1492 he sailed the ocean blue, he found something better than India: a whole new continent." "What rubbish. First of all, educated people knew the world was round since Aristotle. Secondly, Columbus didn't discover anything. There were already people living here for millennia. And he wasn't even the first European to visit. The Norse had settled Newfoundland almost 500 years before." "You don't say, so how come we're not all wearing those cow helmets?" "Actually, they didn't really wear those either." Cough "Who cares what some Vikings did way back when? Those settlements didn't last, but Columbus's did. 
And the news he brought back to Europe spread far and wide, inspiring all the explorers and settlers who came after. Without him, none of us would be here today." "And because of him, millions of Native Americans aren't here today. Do you know what Columbus did in the colonies he founded? He took the very first natives he met prisoner and wrote in his journal about how easily he could conquer and enslave all of them." "Oh, come on. Everyone was fighting each other back then. Didn't the natives even tell Columbus about other tribes raiding and taking captives?" "Yes, but tribal warfare was sporadic and limited. It certainly didn't wipe out 90% of the population." "Hmm. Why is celebrating this Columbus so important to you, anyway?" "Your Honor, Columbus's voyage was an inspiration to struggling people all across Europe, symbolizing freedom and new beginnings. And his discovery gave our grandparents and great-grandparents the chance to come here and build better lives for their children. Don't we deserve a hero to remind everyone that our country was built on the struggles of immigrants?" "And what about the struggles of Native Americans who were nearly wiped out and forced into reservations and whose descendants still suffer from poverty and discrimination? How can you make a hero out of a man who caused so much suffering?" "That's history. You can't judge a guy in the 15th century by modern standards. People back then even thought spreading Christianity and civilization across the world was a moral duty." "Actually, he was pretty bad, even by old standards. While governing Hispaniola, he tortured and mutilated natives who didn't bring him enough gold and sold girls as young as nine into sexual slavery, and he was brutal even to the other colonists he ruled, to the point that he was removed from power and thrown in jail. 
When the missionary, Bartolomé de las Casas, visited the island, he wrote, 'From 1494 to 1508, over 3,000,000 people had perished from war, slavery and the mines. Who in future generations will believe this?'" "Well, I'm not sure I believe those numbers." "Say, aren't there other ways the holiday is celebrated?" "In some Latin American countries, they celebrate the same date under different names, such as Día de la Raza. In these places, it's more a celebration of the native and mixed cultures that survived through the colonial period. Some places in the U.S. have also renamed the holiday, as Native American Day or Indigenous People's Day and changed the celebrations accordingly." "So, why not just change the name if it's such a problem?" "Because it's tradition. Ordinary people need their heroes and their founding myths. Can't we just keep celebrating the way we've been doing for a century, without having to delve into all this serious research? It's not like anyone is actually celebrating genocide." "Traditions change, and the way we choose to keep them alive says a lot about our values." "Well, it looks like giving tired judges a day off isn't one of those values, anyway." Traditions and holidays are important to all cultures, but a hero in one era may become a villain in the next as our historical knowledge expands and our values evolve. And deciding what these traditions should mean today is a major part of putting history on trial.
|
History_vs
|
징기스칸_대_역사_알렉스_젠들러.txt
|
He was one of the most fearsome warlords who ever lived, waging an unstoppable conquest across the Eurasian continent. But was Genghis Khan a vicious barbarian or a unifier who paved the way for the modern world? We'll see in "History vs. Genghis Khan." "Order, order. Now who's the defendant today? Khan!" "I see Your Honor is familiar with Genghis Khan, the 13th century warlord whose military campaigns killed millions and left nothing but destruction in their wake." "Objection. First of all, it's pronounced Genghis Kahn." "Really?" "In Mongolia, yes. Regardless, he was one of the greatest leaders in human history. Born Temüjin, he was left fatherless and destitute as a child but went on to overcome constant strife to unite warring Mongol clans and forge the greatest empire the world had seen, eventually stretching from the Pacific to Europe's heartland." "And what was so great about invasion and slaughter? Northern China lost 2/3 of its population." "The Jin Dynasty had long harassed the northern tribes, paying them off to fight each other and periodically attacking them. Genghis Khan wasn't about to suffer the same fate as the last Khan who tried to unite the Mongols, and the demographic change may reflect poor census keeping, not to mention that many peasants were brought into the Khan's army." "You can pick apart numbers all you want, but they wiped out entire cities, along with their inhabitants." "The Khan preferred enemies to surrender and pay tribute, but he firmly believed in loyalty and diplomatic law. The cities that were massacred were ones that rebelled after surrendering, or killed his ambassadors. His was a strict understanding of justice." "Multiple accounts show his army's brutality going beyond justice: ripping unborn children from mothers' wombs, using prisoners as human shields, or moat fillers to support siege engines, taking all women from conquered towns--" "Enough! How barbaric!" "Is that really so much worse than other medieval armies?" 
"That doesn't excuse Genghis Khan's atrocities." "But it does make Genghis Khan unexceptional for his time rather than some bloodthirsty savage. In fact, after his unification of the tribes abolished bride kidnapping, women in the Mongol ranks had it better than most. They controlled domestic affairs, could divorce their husbands, and were trusted advisors. Temüjin remained with his first bride all his life, even raising her possibly illegitimate son as his own." "Regardless, Genghis Khan's legacy was a disaster: up to 40 million killed across Eurasia during his descendants' conquests. 10% of the world population. That's not even counting casualties from the Black Plague brought to Europe by the Golden Horde's Siege of Kaffa." "Surely that wasn't intentional." "Actually, when they saw their own troops dying of the Plague, they catapulted infected bodies over the city walls." "Blech." "The accounts you're referencing were written over a hundred years after the fact. How reliable do you think they are? Plus, the survivors reaped the benefits of the empire Genghis Khan founded." "Benefits?" "The Mongol Empire practiced religious tolerance among all subjects, they treated their soldiers well, promoted based on merit, rather than birth, established a vast postal system, and enforced universal rule of law, not to mention their contribution to culture." "You mean like Hulagu Khan's annihilation of Baghdad, the era's cultural capital? Libraries, hospitals and palaces burned, irrigation canals buried?" "Baghdad was unfortunate, but its Caliph refused to surrender, and Hulagu was later punished by Berke Khan for the wanton destruction. It wasn't Mongol policy to destroy culture. Usually they saved doctors, scholars and artisans from conquered places, and transferred them throughout their realm, spreading knowledge across the world." "What about the devastation of Kievan Rus, leaving its people in the Dark Ages even as the Renaissance spread across Western Europe?" 
"Western Europe was hardly peaceful at the time. The stability of Mongol rule made the Silk Road flourish once more, allowing trade and cultural exchange between East and West, and its legacy forged Russia and China from warring princedoms into unified states. In fact, long after the Empire, Genghis Khan's descendants could be found among the ruling nobility all over Eurasia." "Not surprising that a tyrant would inspire further tyrants." "Careful what you call him. You may be related." "What?" "16 million men today are descended from Genghis Khan. That's one in every 200." For every great conqueror, there are millions of conquered. Whose stories will survive? And can a leader's historical or cultural significance outweigh the deaths they caused along the way? These are the questions that arise when we put history on trial.
|
History_vs
|
History_vs_Sigmund_Freud_Todd_Dufresne.txt
|
Working in Vienna at the turn of the 20th century, he began his career as a neurologist before pioneering the discipline of psychoanalysis. He proposed that people are motivated by unconscious desires and repressed memories, and their problems can be addressed by making those motivations conscious through talk therapy. His influence towers above that of all other psychologists in the public eye. But was Sigmund Freud right about human nature? And were his methods scientific? Order, order. Today on the stand we have… Dad? Ahem, no, your honor. This is Doctor Sigmund Freud, one of the most innovative thinkers in the history of psychology. An egomaniac who propagated pseudoscientific theories. Well, which is it? He tackled issues medicine refused to address. Freud’s private practice treated women who suffered from what was called hysteria at the time, and their complaints hadn’t been taken seriously at all. From the women with depression he treated initially to World War I veterans with PTSD, Freud’s talking cure worked, and the visibility he gave his patients forced the medical establishment to acknowledge their psychological disorders were real. He certainly didn’t help all his patients. Freud was convinced that our behavior is shaped by unconscious urges and repressed memories. He invented baseless unconscious or irrational drivers behind the behavior of trauma survivors— and caused real harm. How’s that? He misrepresented some of his most famous case studies, claiming his treatment had cured patients when in fact they had gotten worse. Later therapists influenced by his theories coaxed their patients into "recovering" supposedly repressed memories of childhood abuse that never happened. Lives and families were torn apart. You can’t blame Freud for later misapplications of his work— that would be projecting. Plenty of his ideas were harmful without any misapplication. He viewed homosexuality as a developmental glitch. 
He coined the term penis envy— meaning women are haunted for life by their lack of penises. Freud was a product of his era. Yes, some of the specifics were flawed, but he created a new space for future scientists to explore, investigate, and build upon. Modern therapy techniques that millions of people rely on came out of the work he started with psychoanalysis. And today everyone knows there’s an unconscious— that idea was popularized by Freud. Psychologists today only believe in a “cognitive unconscious,” the fact that you aren’t aware of everything going on at a given moment. Freud took this idea way too far, ascribing deep meaning to everything. He built his theories on scientific ideas that were outdated even in his own time, not just by today’s standards— for example, he thought individual psychology is derived from the biological inheritance of events in ancient history. And I mean ancient— like the Ice Age or the killing of Moses. Freud and his closest allies actually believed these prehistorical traumas had ongoing impacts on human psychology. He thought that the phase of cold indifference to sexuality during pubescence was literally an echo of the Ice Age. With fantastical beliefs like these, how can we take him seriously? Any renowned thinker from centuries past has ideas that seem fantastical by today’s standards, but we can’t discount their influence on this basis. Freud was an innovator linking ideas across many fields. His concepts have become everyday terms that shape how we understand and talk about our own experiences. The Oedipus complex? Ego and id? Defense mechanisms? Death wishes? All Freud. But Freud didn’t present himself as a social theorist— he insisted that his work was scientific. Are you saying he… repressed inconvenient facts? Freud’s theories were unfalsifiable. Wait, so you’re saying he was right? No, his ideas were framed so that there’s no way to empirically verify them. 
Freud didn’t even necessarily believe in the psychoanalysis he was peddling. He was pessimistic about the impact of therapy. What! I think I need to lie down! Many of Sigmund Freud’s ideas don’t hold up to modern science, and his clinical practices don’t meet today’s ethical standards. At the same time, he sparked a revolution in psychology and society, and created a vocabulary for discussing emotion. Freud made his share of mistakes. But is a thinker responsible for how subsequent generations put their ideas to use? Do they deserve the blame, credit, or redemption when we put history on trial?
|
History_vs
|
Why_is_Marie_Antoinette_so_controversial_Carolyn_Harris.txt
|
Order! Order! Who’s the defendant today? Looks pretty fancy. Indeed, Your Honor. This is Marie Antoinette, the Queen of France who was notorious for living in opulence while the peasants starved. That is sensationalist slander. Marie Antoinette had little power over her circumstances and spent her brief life trying to survive in a turbulent, foreign country. You mean she wasn't French? That’s right, Your Honor. She was born in 1755 as the Hapsburg Archduchess Maria Antonia. After two of her older sisters passed away, she became the only choice for a political marriage to Louis-Auguste, heir to the French throne. Essentially, she was sacrificed to secure peace between Austria and France, all at the age of 14. She seemed to have adjusted to this “sacrifice” by 1774 when her husband was crowned king. She lived a life of luxury, wearing elaborate headdresses, importing foreign fabrics— she even had her own private chateau near Versailles! Meanwhile, France was in an economic tailspin. Bad harvests resulted in mass food shortages, wages were falling, and the cost of living had skyrocketed. Marie Antoinette’s expensive tastes were completely insensitive to the plight of her subjects. She was the Queen! If she hadn’t looked glamorous, she would have been criticized. Besides, she sometimes used her image for good. After convincing the King to be vaccinated against smallpox, she commissioned a special headdress to make the treatment fashionable for all. She also used her influence to appoint unqualified friends and admirers to important posts. Even more disastrous, she encouraged the King to get involved in the American Revolution, a conflict that cost France 1.5 billion francs. Objection! The Queen had very little influence over her husband’s political decisions at that time. Besides, France’s financial crisis was much more related to the country’s outdated tax system and lack of an effective central bank. How so? 
While France's nobility and clergy had numerous tax exemptions, peasants often paid more than half their income in taxes. This system buried France in debt long before the Queen's arrival. Her personal expenses were merely a scapegoat for decades of financial negligence. That doesn’t change that Marie Antoinette spent tax money on luxuries while the masses starved! She was so oblivious that when she heard people couldn’t afford bread, she recommended they eat cake instead. This is almost certainly a fabrication attributed to the Queen by her enemies. In fact, Marie Antoinette frequently engaged in charity work focused on addressing poverty. Her reputation as a heartless queen was based on rumors and slander. Even the most famous case against her was a complete fraud. Pardon? In 1784, a thief forged fake letters from the Queen to purchase an outrageously expensive diamond necklace. The truth came out eventually, but the public already saw her as a wasteful spendthrift. Meanwhile, it's really her husband who ruined France's finances. On that, we agree. Louis XVI was an incompetent king. Even after the revolution began and he lost much of his power to the newly formed National Assembly, he refused to yield control. Louis vetoed numerous pieces of legislation— and he was supported by his conservative Queen. To a point. Marie Antoinette believed in the divine right of kings, but despite personal reservations, she tried to work with reformers. Though all she got in return were false reports that she was sleeping with them. No amount of charity work could counter this avalanche of slander. The revolutionaries also prevented the King’s family from leaving Paris— how could she negotiate with people keeping her prisoner? Well, they were right to do so! In 1791, the royal couple tried fleeing to Austria to gather support and regain power. Even after they were caught, the King and Queen continued to pass military secrets to their Austrian contacts. Isn't that treason? 
Certainly, and Louis was executed for it, alongside 32 other charges. Even if you believe the King's execution was just, there's no excuse for how the new government treated Marie Antoinette. She was separated from her son and kept in a cell with no privacy. The tribunal in charge of prosecuting the Queen had no proof of her treason, so they denigrated her with baseless accusations of incest and orgies. Yet she maintained composure until the very end. The Queen’s final words were an apology to her executioner for stepping on his foot. However refined she may have been, Marie Antoinette was willing to betray her country to stay in power. In life and death, she remains a symbol of everything wrong with the decadent monarchy. A convenient symbol— and an example of the public’s appetite for smearing prominent women with their own fantasies and frustrations. So what you’re saying is she was guilty of being Queen? Should monarchs be judged by their personal qualities or the historical role they occupied? And can even the powerful be victims of circumstance? These are the questions that arise when we put history on trial.
|
History_vs
|
History_vs_Henry_VIII_Mark_Robinson_and_Alex_Gendler.txt
|
He was a powerful king whose break with the church of Rome would forever change the course of English history. But was he a charismatic reformer or a bullying tyrant? Find out on History versus Henry VIII. Judge: Order, order. Now, who do we have here? Looks like quite the dashing fellow. Defense: Indeed, your honour. This is Henry VIII, the acclaimed king who reformed England's religion and government and set it on course to becoming a modern nation. Prosecutor: I beg to differ. This is a cruel, impulsive, and extravagant king who had as little regard for his people as he did for his six wives. Judge: Six wives? Defense: Your honor, Henry's first marriage was arranged for him when he was only a child. He only married Catherine of Aragon to strengthen England’s alliance with Spain. Prosecutor: An alliance he was willing to toss aside with no regard for the nation. Defense: Henry had every regard for the nation. It was imperative to secure the Tudor dynasty by producing a male heir – something Catherine failed to do in over twenty years of marriage. Prosecutor: It takes two to make an heir, your honor. Defense: Ahem. Regardless, England needed a new queen to ensure stability, but the Pope refused to annul the union and let the king remarry. Judge: Sounds like quite a pickle. Can’t argue with the Pope. Prosecutor: And yet that’s exactly what the king decided to do. He uprooted the country’s religious foundations and broke the Church of England away from Rome, leading to centuries of strife. Defense: All Henry did was give the Church honest domestic leadership. He freed his subjects from the corrupt Roman Catholic establishment. And by rejecting the more radical changes of the Protestant reformation, he allowed his people to preserve most of their religious traditions. Prosecutor: Objection! The Church had been a beloved and popular institution that brought comfort and charity to the masses. 
Thanks to Henry, church property was seized; hospitals closed, and precious monastic libraries lost forever, all to enrich the Crown. Defense: Some of the funds were used to build new cathedrals and open secular schools. And it was necessary for England to bring its affairs under its own control rather than Rome’s. Prosecutor: You mean under Henry’s control. Defense: Not true. All of the king’s major reforms went through Parliament. No other country of the time allowed its people such a say in government. Prosecutor: He used Parliament as a rubber stamp for his own personal will. Meanwhile he ruled like a tyrant, executing those he suspected of disloyalty. Among his victims were the great statesman and philosopher Thomas More – once his close friend and advisor – and Anne Boleyn, the new queen Henry had torn the country apart to marry. Judge: He executed his own wife? Defense: That…wasn’t King Henry’s initiative. She was accused of treason in a power struggle with the King’s minister, Thomas Cromwell. Prosecutor: The trial was a sham and she wouldn’t have been convicted without Henry’s approval. Besides, he wasn’t too upset by the outcome - he married Jane Seymour just 11 days later! Defense: A marriage that, I note, succeeded in producing a male heir and guaranteeing a stable succession… though the new queen tragically died in childbirth. Prosecutor: This tragedy didn’t deter him from an ill-conceived fourth marriage to Anne of Cleves, which Henry then annulled on a whim and used as an excuse to execute Cromwell. As if that weren’t enough, he then married Catherine Howard – a cousin of Anne Boleyn – before having her executed too. Defense: She was engaged in adultery to which she confessed! Regardless, Henry’s final marriage to Catherine Parr was actually very successful. Prosecutor: His sixth! It only goes to show he was an intemperate king who allowed faction and intrigue to rule his court, concerned only with his own pleasure and grandiosity. 
Defense: That grandiosity was part of the king’s role as a model for his people. He was a learned scholar and musician who generously patronized the arts, as well as being an imposing warrior and sportsman. And the lavish tournaments he hosted enhanced England’s reputation on the world stage. Prosecutor: And yet both his foreign and domestic policies were a disaster. His campaigns in France and his brutal invasion of Scotland drained the treasury, and his attempt to pay for it by debasing the coinage led to constant inflation. The lords and landowners responded by removing access to common pastures and turning the peasant population into beggars. Defense: Beggars who would soon become yeomen farmers. The enclosures made farming more efficient, and created a labor surplus that laid the foundation for the Industrial Revolution. England would never have become the great power that it did without them …and without Henry. Judge: Well, I think no matter what, we can all agree he looks great in that portrait. A devout believer who broke with the Church. A man of learning who executed scholars. A king who brought stability to the throne, but used it to promote his own glory, Henry VIII embodied all the contradictions of monarchy on the verge of the modern era. But separating the ruler from the myth is all part of putting history on trial.
|
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
|
10_Marxs_Theory_of_Capitalism.txt
|
Prof: So one way you can think about comparisons between Marxism and the utilitarian tradition is that both have a micro story and a macro story. So when you think about the utilitarian tradition we have the micro story of the Pareto principle and how each individual transaction will go, and then we have a macro story that if you allow people to transact freely you get the most efficient utilitarian result over time. And so you get the rights-utility synthesis, at least in the aspiration. Another way of putting that is what Adam Smith famously referred to as an invisible hand theory; that while no individual participant is interested in generating an efficient aggregate outcome, indeed, each individual participant is just trying to get on as high an indifference curve as possible. The byproduct of their collective individually selfish actions is that beneficent overall result, and so that's the invisible hand of the market that Smith talks about in The Wealth of Nations. Marx, too, has a micro story, a macro story, and an invisible hand theory, and that's what we're going to spend today's lecture and next Monday's lecture talking about. Today, principally, we're going to focus on his micro story, his story of how wealth gets created at the micro level under capitalism, and then we will link it on Monday to his macro story, and he too, you will see, has an invisible hand theory. That is to say the particular players at the micro level confront incentives to behave in certain ways, and the byproduct of their actions is a macro result which nobody understands or intends. The difference is we'll see that for Marx the invisible hand isn't entirely benevolent. It's benevolent for a while, but then it starts to become malevolent over time because of the way that the dynamics of capitalism play themselves out. So that's sort of where we're headed in the medium-term in the next couple of lectures. 
I want, though, before heading into the micro story to make a couple of conceptual points. Remember that when we look at Enlightenment thinkers we're concerned first and foremost with the fact that they are all motivated by the idea of individual freedom as the highest good, and basing politics on scientific principles rather than appeals to custom, or natural law, or religion, or tradition, or anything else. And I just want to highlight what Marx's assumptions about those two things are. I've actually mentioned these in one way or another in last Monday's lecture, but just to emphasize, first of all, the idea of freedom in Marx is captured beautifully in this very famous passage that he wrote, actually a relatively young Marx, in his critique of the Hegelians and their follower Feuerbach in a work called The German Ideology. He has this very vivid description of what he means by unfreedom, the antithesis of his ideal. He says, [T]he division of labour (which we talked about last time) offers us the first example of how, as long as man remains in natural society, that is, as long as a cleavage exists between the particular and the common interest, as long, therefore, as activity is not voluntarily, but naturally, divided, man's own deed becomes an alien power opposed to him, which enslaves him instead of being controlled by him. For as soon as the division of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. 
He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society (this is his utopian ideal), where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow (this is maybe the most famous line in all of Marx), to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. So this is a utopian Marx, this idea that as soon as you have a division of labor we're no longer free. We're enslaved to our particular place in the division of labor. Remember what I read to you from Adam Smith's description of the pin factory? Somebody spends their whole life stretching that wire for the pin. Somebody spends their whole life putting the heads on the pin. That's the sense in which Marx wants to say we're alienated from our true selves once the division of labor sets in, and we can only return to being fully rounded human beings at one with ourselves when the division of labor ultimately is abolished, and a precondition for that is the super abundance of wealth that capitalism is historically destined to create. And how we will finally get there is the story Marx believed of how his macro theory would play out over time. And we will only reach a communist society where people can be genuinely free once the need for a division of labor has been obviated by the existence of a super abundance of wealth. So all of that's coming, but this is just to underscore that his basic ideal is freedom and not equality, and his argument is going to be, as I said to you last time, that the condition for the free development of each is the free development of all. 
Second, just to come back to and underscore his theory of science: Marx is an objectivist. He believes that there are forces operating in history, which he calls the materialist conception of history, which shape the way history unfolds. History has a direction. It's moving through a variety of phases, each of which has internal contradictions, and the way they play themselves out is going to determine what happens. So the tension in a feudal society between the peasants and the lords is going to determine the way in which conflict in that society occurs. Then you'll get the emergence of a bourgeois class and they will create capitalism, but that will bring forth a working class who will get into conflicts with them, and as those conflicts play out you'll eventually get communism and socialism. It's a reductionist theory. As I said to you before, think of Bill Clinton's line, "It's the economy, stupid." Most of what happens, if not all of what happens, that's of any importance in politics is determined by economic interests. Whether people understand this or not, or agree with it or not is beside the point. And that brings us to a third element of this Marxian science that I just want to underscore, which will be relevant for today's lecture: Marx makes a distinction between a class-in-itself and a class-for-itself. I want to spend a couple of minutes on, first, his definition of class, and then this "in-itself/for-itself" distinction. A lot of people say that Marx is anti-individual, that he's not an individualist because he thinks about things in terms of classes and class conflict. That is a very superficial reading of Marx. As a very eminent scholar called Jon Elster has pointed out in many books that he's written on the subject, actually Marx is an individualist because if you look at how he defines classes he defines them by reference to the way in which individuals relate to the means of production. 
So what is it that makes you working class as opposed to a member of the capitalist class? What makes you working class is if you have to sell your labor-power to somebody else in order to live. So if you want to know whether somebody is working class you say, "Does he have to sell his labor-power to somebody else in order to live or not?" If the answer's yes, he's working class. If the answer's no, he's not working class. So you might say, "Hmm, well, I have to come here and work every day to get paid my salary. I have to sell my labor to the university in that sense, but I'm not working class. Somebody else might tell me I'm working class, but they can speak for themselves. I drive a nice car. I live in the suburbs. I'm not working class." Marx's answer is to appeal to this distinction between a class-in-itself and a class-for-itself. The answer is, "Shapiro, you're just deluded. You might think you're not working class, but that's because you don't understand your objective position in the division of labor. You are working class whether you understand it or not." So I'm working class in the class-in-itself sense, but I'm not working class in the class-for-itself sense if I don't believe that I'm working class. So the "in-itself" represents the objective definition of class, the "for-itself" is the subjective; whether people actually believe they're working class and perceive themselves to be working class. But what matters for the movement of history is the "in-itself." What's ultimately going to drive everything is the objective logic of class conflict, not the subjective logic of what people think is their position in the social division of labor. So it's important to keep those two things separate. 
Now, Marx, as I've already said, was a utopian thinker, and he does ultimately want to say, for reasons that I'll get into later today and in more depth on Monday, he does ultimately want to say that what differentiates a communist revolution from all previous revolutions is that these two things will become synonymous; that for the first time in history the working class-for-itself will come to see itself as the working class-in-itself. We will understand our objective position in the division of labor and self-consciously create the new order, whereas in all previous modes of production, people don't really understand their position in the objective scheme of things and so they create orders as invisible hands, to go back to Adam Smith's term. They create orders as byproducts of their intended activities, not as products of their intentional design. And so that's that distinction between "in-itself" and "for-itself." Now, you might listen to all of this and you could say, "Well, it's a kind of quaint nineteenth-century idea, but it doesn't have much to do with the way people live in the world today." After all, as we have all noticed with great intensity in the past year and a half of the financial crisis, many people who work for wages also have money in the stock market. Well, but then they're capitalists, aren't they, because stock is capital? So I have my retirement money in my TIAA-CREF, as college professors get it, in the stock market, so I'm an owner of capital as well as a worker. And this sort of binary distinction between the working class and the capitalist class is just too simple-minded. What do you think Marx would say to that? Anybody? Any takers? I'm not really working class because I've got money, and I have stocks invested in all kinds of things, mutual funds, so I'm not really working class. Anyone think what Marx might say to that? It's not easy. I think that what he would say is this. 
He would say, "Well, the definition of working class is that you have to work for somebody else in order to live. Not that you choose to work for somebody else in order to live." So if I have enough money in my TIAA-CREF invested in the stock market that I can quit tomorrow and do just fine, then I'm not working class. So it's this element of compulsion that makes you working class. So it's not important from Marx's point of view that some of us might own stocks, but if we really owned enough stock that we were not in this position of having to work for somebody else in order to live we wouldn't be working class, and so people could move in and out of the working class. Indeed, I know people on the Yale faculty who two years ago were about to retire and live off their accumulated TIAA-CREF, and then it went up in smoke when the wheels came off the economy, and now they're not retiring. So what Marx would say is, "Well, they managed to get out of the working class in theory, in principle, but then they fell back down into the working class." So it's this element of compulsion. They're not retiring because they don't have enough, and therefore they have to work for somebody else in order to live. So that's the distinction between a class-in-itself and a class-for-itself that drives his theory. Okay, so what is the theory? We're back to Locke. Those of you who read chapter five of The Second Treatise carefully, when we looked at Locke and his theory of property, saw how for Locke the workmanship ideal plays out in the economic realm with the proposition that most of the value of a product comes from work. Very crude in Locke; at one point he says, "Ninety percent of it comes from work. The rest is natural resources." At another point he says, "Ninety-nine percent of it comes from work. The rest is natural resources." But Locke doesn't really have any mechanism by which labor supposedly creates value. 
And in the seventeenth and eighteenth century there were really three theories floating around. There were, as I think I mentioned to you on Monday, there were in Europe, particularly France, the physiocrats who said value comes from the land, but there were English theorists who looked at them and said, "No. That can't be right because look how rich Holland is and they don't have a lot of land," and so they thought value came from trade. But then the third group, starting in the seventeenth century with Hobbes, Sir William Petty, Locke and others said, "No, value comes from work. The source of value is work," and that's what Marx picks up on in trying to develop a theory of value. And his theory is more refined than Locke's, more refined than Smith's, and more refined than Ricardo's. It starts with this idea of socially necessary labor time, or SNLT, and he defines SNLT as the labor time required to produce any use-value. Anyone remind us what is a use-value? Monday's lecture, what's a... Yes? Yes? Student: Is it useful? Prof: Right. So anything with utility is a use-value, right? So it's the amount of labor time required to produce any use-value under the conditions of production normal for a given society and with the average degree of skill and intensity of the labor prevalent in the society. So let's suppose Leonid and I are making typewriters. One of the big sort of conundrums of labor theories of value has been, well, what if Leonid is a perfectionist and he spends twice as long making his typewriter as I spend making my typewriter, but most of the perfectionism is actually in his own head because at the end of the day they're the same? Marx would say--he would not say Leonid's typewriter was worth twice Shapiro's typewriter because he put twice as much labor into it. No, because half of his labor would be socially unnecessary. So it's socially necessary labor time. 
Or if a new labor-saving device was created so that the keys could be made much more quickly than our laboriously doing it with a file, and I use the new machine, the labor-saving device, and Leonid doesn't, so my typewriters get produced more quickly. His typewriter is not going to sell for more than my typewriter because, again, he's using socially unnecessary labor. So that's the notion of "the average degree of skill and intensity of labor prevalent in the society at the time." So that's what's going to determine the value of commodities: the long-term--remember we talked about the natural value--the long-term equilibrium price of a commodity is going to be determined by the amount of labor power that is socially necessary to produce it. And this idea of "socially necessary" includes everybody working at the same or an average intensity, and it includes using available technologies, because if some use them and others don't, those who don't are going to be deploying unnecessary labor time, and that's not going to be reflected in the value of the product, because in a competitive market the most cheaply priced commodity is going to beat the others out. Important to remember, Marx works with a model of a perfectly competitive economy, right, in which equivalents exchange for equivalents. No cheating; I mentioned that on Monday. His model is a perfectly competitive economy, at least at the beginning. He's got a view about how economies evolve over time to become uncompetitive, and we'll get to that. But initially, and in his analytic story, in a competitive economy the most cheaply produced products are going to drive out the less cheaply produced, and so it's socially necessary labor time, not actual. It doesn't matter how much blood, sweat and tears Leonid actually uses. What counts is how much he needs to use, okay?
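The SNLT idea can be sketched in a few lines of Python. This is a toy illustration, not anything from Marx or the lecture: the function name and the hour figures are made up for the typewriter example.

```python
# Toy illustration of socially necessary labor time (SNLT): value is set by
# the labor time required under normal social conditions, not by the hours
# any individual producer actually spends. Numbers are hypothetical.

def commodity_value(actual_hours: float, snlt_hours: float) -> float:
    """Exchange value of one unit, measured in labor-hours.

    Hours beyond the social norm are 'socially unnecessary' and add nothing,
    so the value is simply the socially necessary labor time."""
    return snlt_hours

# Leonid the perfectionist spends 20 hours per typewriter; Shapiro spends 10.
# If 10 hours is the social norm, both typewriters exchange at the same value.
print(commodity_value(20, snlt_hours=10))  # 10
print(commodity_value(10, snlt_hours=10))  # 10

# A labor-saving device halves the socially necessary time: every
# typewriter's value falls, including those still made the slow way.
print(commodity_value(20, snlt_hours=5))   # 5
```

The point of the (deliberately trivial) function body is that `actual_hours` never enters the calculation: only the social norm does.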
So that--you could say, "Okay, but that's not the whole story," and to get into that we have to go a step further with Marx and look not just at the labor theory of value, but the labor theory of surplus value. This is where the rubber meets the road for Marx. What he wants to say is, "Living human labor-power is the only source of fresh value, of exchange value." So this is the workmanship ideal on crack, okay? Living human labor-power is the only source of fresh value. So when Locke said, "Work is ninety percent, or maybe it's ninety-nine percent of the value of commodities and the rest comes from the common," Marx is going to say, "It's a hundred percent. All exchange value comes from work." And what he's thinking of here is two things. One is he wants to say, "If you look at all the commodities that are produced in a market economy, the one thing they all have in common is that they're products of human work." It's the single common denominator, right? The capacity to work is a commodity like any other--we'll get into this in more detail in a minute--but when you think about what a worker is paid, it's determined like the value of any other commodity. So when we say, "What is the value of a book?" It's determined by the amount of labor power necessary to produce that book. And when we say, "What is the value of a steel worker?" It's determined by the amount of labor power necessary to produce that steel worker. The worker is a commodity in just the same sense as the book is a commodity. So wages are determined by the cost of producing the worker, and the reason a physicist gets paid more than a manual laborer then is simple. It costs more to produce the physicist than it costs to produce the manual laborer. All the training, and education, and so on, that goes into the physicist costs more than the training and education that goes into producing a manual laborer.
So wages are not explained by the value of what the worker produces. This is one of the things people often get wrong. Wages are explained by the cost of producing the worker. I'll say that again. It's important. Wages are explained not by the value of what the worker produces, but rather by the cost of producing the worker. So what do you think of that? What would somebody say? Somebody who thought this was--what would Marx say about, "Why is a Michelangelo painting worth so much, or why does a professional baseball player get tens of millions, maybe hundreds of millions of dollars, if this is true?" Anyone think of what Marx would say? Anyone want to guess? Yeah? Student: Would he say it's because like how much time they had to practice, and how much money and time they put into learning their skills? Prof: That would be part of it, but still there are only so many hours in a day. How do you get up to a hundred-million-dollar salary? I mean, you're on the right track. What else goes into the creation of a major league baseball player? Why don't they have very good major league baseball players in Britain? Because they don't have minor leagues, they don't have farm teams; there's this huge cost associated with getting the one brilliant best-of-all-time player, so that's what he would have to say. Or if you think about the value of a Michelangelo painting, it's not just the training of Michelangelo. It's all those failed painters that all those patrons wasted all that money investing in, right? That all goes in, in some indirect sense, into the wage differential. So that's sort of what--of course it is a stretch, and one thing that Marx did not think about was what we call in economics today winner-take-all markets. Anyone know what a winner-take-all market is? What's a winner-take-all market? What is the phrase suggesting? What do you think a winner-take-all market could be? Think about the wage differential between major league and minor league baseball.
It's huge, right? That's a winner-take-all market. The way you can think about it is, before the advent of the gramophone record, being a really pretty good opera singer could--you'd do pretty well. Every major city would have an opera with a pretty good opera singer who'd make a good living. Once you have the gramophone record, then everybody can buy Pavarotti, right? So then the difference between Pavarotti, and the next best, and the third-best opera singer becomes much more important. So that's the idea of a winner-take-all market, and Marx had nothing to say about them. So he would have to reach for this rather contorted logic of saying, "It just costs much more to produce the best baseball player in the world because you have to fund everything that goes into that production; not only his own practicing and training, but everything that goes with it." So that's the first point. Labor power is a commodity, and its exchange value, or its price, is determined by what it costs to produce it. But then this is the magic. This is the workmanship ideal on crack. This is the idea. He wants to say that living human labor power is a unique commodity in that its consumption as a use-value leads to the creation of fresh exchange value. What does this mean? It means that suppose I have some money and I spend a thousand dollars on a wonderful meal with good wine. At the end of that meal it's gone, right? I pay the check and it's gone. I've consumed it. Whereas, if I had spent that thousand dollars hiring one of you to paint one wall in my home, when it's done my home is worth something more because it has a freshly painted wall. That's what he means, okay? So living human labor-power differs from other commodities in that its consumption as a use-value leads to the creation of fresh exchange value, right? You eat the meal, it's gone, but you consume the labor-power of the person painting your house, or the person working in your factory, and it's not gone, right?
I mean, the labor-power's gone in the sense that it's been expended. Those calories have been spent, but you have something of greater value. You have a house that's worth more than it was before you hired that person to paint it, and that's what makes labor-power unique, right? He wants to say it is the only source of fresh value, okay? So a couple of terms here to get straight. His archaic language--you probably read--he makes a distinction between what he calls constant capital and variable capital. All you need to keep in mind there is that capital is what the capitalist spends in the productive process. Variable capital is wages, and constant capital is everything else. Sometimes he uses these symbols: big C means all capital; little c means everything the capitalist spends on things other than wages; and V is what is spent on wages. And why might he call it variable? The idea is that wages are always driven toward subsistence. This is an empirical assumption. Marx makes a few empirical assumptions, and of all the empirical assumptions he makes this is the one that actually is the most accurate. He wants to say there are always some unemployed in capitalist economies, and because there are always some unemployed in capitalist economies, wages will be driven towards subsistence. Because if you have, say, a union that organizes and drives up wages, the employer will then go and hire nonunion workers and say, "Well, will you work for me at something lower than the union wage?" and the answer will be yes, because the unemployed worker won't have a choice, okay? So wages will be driven towards subsistence. Now that puts another question on the table. Well, what is subsistence? What is subsistence?
And Marx does say, "Well, it has a social and historical element," so that is to say in a society like ours a definition of subsistence might include having enough money to buy a car in order to drive to work, whereas obviously that was not true in nineteenth-century Britain, so what counts as subsistence changes. And we will see there are other features of Marx's theory of capitalism which show that there's actually an impetus for what counts as subsistence to go up, but at any given time there is an accepted level of subsistence, and wages will be driven toward that because of the existence of what he flamboyantly describes as the reserve army of the unemployed, right? That's what keeps wages at subsistence or close to it in a competitive economy. Okay, let's do a little micro transaction. Suppose there's a working day, okay? And suppose the accepted working day is ten hours long. Marx says, "Well, think about what the worker is doing." The worker is producing cotton, let's say; it doesn't really matter what. And for some portion of that working day the cotton that he's producing, when eventually sold, is going to cover the cost of his wages. So the worker produces cotton all day long, and the capitalist eventually sells the cotton, right, and gets something for it, and then he's got to pay his worker his wage. And we're just going to say for the purposes of this example that the work that's done in the first four hours is going to cover the wage bill, and the rest is what Marx called surplus. So that's the difference between necessary and surplus labor time. And Marx has the idea of a rate of exploitation, which is the ratio of surplus to necessary labor time. You could even make a little index, and so you'd say it's one point five. Okay, so this is the value that's produced that covers the wage bill. This is the rest. Of course this isn't all profit, right? Why isn't it profit? Why isn't all that profit, somebody? Think about it. You're a capitalist. Yeah?
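The arithmetic here is simple enough to put in a couple of lines. The function name is mine, but the numbers are the lecture's: a ten-hour day in which the first four hours cover the wage bill.

```python
# Rate of exploitation = surplus labor time / necessary labor time.
# Necessary labor time is the part of the day whose product covers the wage
# bill; everything beyond it is surplus.

def rate_of_exploitation(working_day: float, necessary_hours: float) -> float:
    surplus_hours = working_day - necessary_hours
    return surplus_hours / necessary_hours

# The lecture's example: 10-hour day, first 4 hours cover wages -> 6/4.
print(rate_of_exploitation(10, 4))  # 1.5
```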
Student: Because some money goes to the actual, I guess, building where they're working or even material needed to create the product. Professor Ian Shapiro: Exactly. The capitalist has to buy raw material, has to do advertising, has to have a system of managers, research and development; all of that has to be paid out of this. So profit is some subset of this, but it's not all of this, okay. Okay, so we're in the cotton business, and it's a competitive business, and so another cotton manufacturer says, "You know what? I think these workers are slacking. I'm going to say from now on to my workers, 'You work eleven hours.'" Wages are already at subsistence by assumption, right? So the workers might say, "No, we're not," and he can say, "Goodbye, I'll find somebody unemployed to do it then," okay? So if you do that we now have an eleven-hour working day. The rate of exploitation goes up, right, because now there are seven hours of surplus labor rather than six hours of surplus labor. Now you might say, "Well, the worker's going to get worn out sooner and die sooner," but let's just suspend disbelief about that and say that, in fact, this capitalist is going to do better than that capitalist; this capitalist is going to be able to sell his or her product more cheaply, and therefore that capitalist is going to be in trouble. People think that in Marxism capitalists are afraid of the working class. No. On Marx's theory, who is a capitalist afraid of? Who is the capitalist looking over his shoulder at? Student: The other capitalist. Professor Ian Shapiro: Yeah, the capitalist down the street. He couldn't care less about--he's not worried about the workers. He's worried about the capitalist who can come into his industry and produce more effectively, okay? So if you can make these workers work an extra hour, you're going to do it. And then everybody else is going to have to do it, right, or they're going to go out of business.
And this is what Marx calls increasing absolute surplus value. This is why Marx says that under primitive capitalism, in the early stages of capitalism, the battles are going to be over things like the length of the working day. If any of you've done nineteenth-century British history you will know that this is one of the big battles--it starts with the Chartists in the early nineteenth century and goes on. What are they pushing for? The Ten Hours Bill; they want the Ten Hours Bill. They want Parliament to come in and say, "The working day's ten hours," to limit this. So to the extent that the workers can organize politically and put pressure on to get a Ten Hours Bill, then you limit this. But in any case, Marx wants to say, there are only so many hours in a day and people do have to sleep after all. So you're not going to get much out of this. The real secret to capitalism's dynamism is not the move from A to B. It's the move from A to C. Notice here we're back at a ten-hour day, but now, by assumption, I'm saying the capitalist is going to cover his wage bill for that worker in three hours rather than four. How is he going to do that? Yeah? Student: Reduce wages. Professor Ian Shapiro: Pardon? Student: Reduce wages. Professor Ian Shapiro: But you can't reduce wages because wages are already at subsistence. So, by assumption, you have to pay the same wages; how are you going to do it? Student: Technology. Professor Ian Shapiro: Technology, exactly. The name of the game is technology. Somebody's going to go and buy a spinning jenny and put it in his factory, and then the workers will work far more productively, and as a result of that, the capitalist will cover his wage bill more quickly. And of course, the minute one capitalist puts the spinning jenny in, everybody has to put the spinning jenny in, right, or else they're going to go out of business.
Because the capitalist that's covering his wage bill in three hours of that worker's work, rather than four, is going to be able to sell the cotton more cheaply in the market and therefore outcompete the capitalist who is still using the old system. And so the way capitalism works is that there's constant pressure for technological innovation, because lengthening the working day is not a very effective way of increasing productivity, right? In the early stages you'll see that, but there will be political limits, the Ten Hours Bill, and there will be natural limits, human capacity, so then all the pressure is on dynamic innovation. Okay, now this is where it gets interesting, because what he wants to say is, "But we know that only living human labor power produces surplus value," so as you become more and more capital intensive, the capitalist is spending more and more on constant capital and less and less on variable capital. To put it in modern jargon, production is becoming more capital intensive. You're spending more and more of your investment on something that does not produce new value. So another way you can think about this--and remember I said to you last time, all the classical political economists thought they had to explain the declining tendency in the rate of profit. What Marx wants to say is, if this is the status quo, okay, and one capitalist in the cotton industry puts in the spinning jenny and this becomes the status quo, this capitalist will have higher profits than that capitalist, right? But once they've all put in the spinning jenny, the rate of profit in the cotton industry will be lower than it was up here, okay? So there's a short-run/long-run conflict of interest for capitalists, because in the short run, at the margin, I will innovate and my profits will go up, but then everybody will copy me and the rate of profit in the cotton industry will fall over time.
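This falling-rate-of-profit dynamic can be sketched numerically. The numbers and the fixed rate of exploitation below are hypothetical (they are not from the lecture); the notation follows Marx's as just described: c for constant capital, v for variable capital (wages), s for surplus value.

```python
# Falling rate of profit, sketched with hypothetical numbers.
# Only living labor yields surplus, so take s = e * v for a fixed rate of
# exploitation e. As production grows more capital intensive (c rises while
# v stays put), the rate of profit s / (c + v) falls.

def rate_of_profit(c: float, v: float, e: float = 1.5) -> float:
    s = e * v              # surplus value comes only from living labor
    return s / (c + v)     # profit measured against total capital advanced

for c in (10, 50, 100, 500):
    print(f"c={c}: rate of profit = {rate_of_profit(c, v=100):.3f}")
```

Holding v at 100, the rate of profit falls steadily as c grows, which is the long-run side of the short-run/long-run conflict: each innovator gains at the margin, but once everyone has the spinning jenny the industry-wide rate is lower.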
Now, there are some dubious assumptions, which I'm going to come back to on Monday, but I first want to just draw your attention to some of the normative aspects of this. What Marx calls the rate of exploitation, as I said, is the ratio of surplus to necessary labor time, but now look down the right-hand side here. He's got all this jargon but you can unpack it here. S over V is the rate of exploitation. All these things are basically equivalent ratios. But look down here, and now think about this. Suppose I was a capitalist and you were a worker working for me, okay? And suppose I said, "Times are tough in the cotton industry. You've got a choice. We can go to here or we can go to here. We can pick B or we can pick C, but A is no longer an option. A means I'm moving to Mexico; can't do A anymore. So take your pick, B or C." How many are going to pick B? Hands up. How many are going to pick C? Everybody. But then it seems like this is very counterintuitive, doesn't it? Because you all went for something that, according to Marx, has a higher rate of exploitation, two point three-three as opposed to one point seven-five. How can that be? How can that make any sense? Okay, I'm going to go two minutes over here to leave you with this to think about, and then we'll pick up with it on Monday. This is the intuition. If you think about how people think about how well off they are, the Pareto system says, "Am I on a higher or lower indifference curve than I was on before?" It's a bit like when Ronald Reagan was running for reelection in 1984; his slogan was, "Are you better off than you were four years ago?" right? You look at your situation and you say, "What was my situation? Is it better? Is it worse?"--a self-referential comparison. Marx says, "No, that's not the relevant comparison. The relevant comparison is other-referential."
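The three blackboard scenarios can be checked in a few lines. The labels and numbers follow the lecture's example: A is the original ten-hour day with four necessary hours, B the lengthened eleven-hour day, C the ten-hour day after the spinninging jenny cuts necessary time to three hours.

```python
# The lecture's three scenarios: (length of working day, necessary hours).
scenarios = {
    "A": (10, 4),  # original 10-hour day, 4 hours cover the wage bill
    "B": (11, 4),  # day lengthened to 11 hours
    "C": (10, 3),  # 10-hour day, but technology cuts necessary time to 3
}

for name, (day, necessary) in scenarios.items():
    surplus = day - necessary
    print(f"{name}: rate of exploitation = {surplus / necessary:.2f}")
# A: 1.50, B: 1.75, C: 2.33 -- the workers all prefer C (shorter day, same
# subsistence wage) even though its rate of exploitation is the highest.
```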
You decide how well off you are not just by reference to what you get compared with what you got before, but by reference to what others are getting. So that in this kind of situation, if you ask, "Which is worse for the worker?" they may well say that a situation in which they have more is worse than a situation in which they have less, if the capitalist has even more. So Marx wants to say, "We're inherently other-referential. What's going to make us more or less happy is not just what we have, but what we have in relation to others." Now, it turns out that on this matter he was half-right, as we see from a hundred years of sociological studies. What people find acceptable and desirable is connected to what others have, but usually, and this is where he was half-wrong, it's connected to what people who are similarly situated to themselves have. So workers in the steel industry will compare themselves to workers in the auto industry, but workers in the auto industry will not compare themselves to what executives in the auto industry get. So people are other-referential, but in a more horizontal sense. And this is true up and down the occupational scale. If you have been a Yale administrator, one thing you will learn is that it will matter much more to a professor to learn that he or she is being paid two thousand dollars less than the professor down the corridor than to learn that he or she is being paid two hundred thousand dollars less than the attorney who lives next door. People compare themselves to others, but they compare themselves to similarly-situated others. Why that is, is an immensely complicated and interesting subject, but it means that Marx was right to think we're basically other-referential. We don't just ask the question "Has my utility gone up or down?"
We do compare ourselves to others, but he was mistaken in thinking that workers would compare themselves to capitalists rather than to other workers in deciding whether or not they were well off, and that would feed into his assumptions about the conditions under which workers would be likely to become militant. Okay, we'll start with that issue on Monday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro | 25_Democratic_Justice_Applications.txt
Prof: But what I wanted to do today was pursue the discussion of democratic justice. I talked about it as a semi-contextual idea: there's a general argument, which we discussed on Monday, that recognizes the subordinate character of democratic constraints on the superordinate goods that people pursue in different walks of life. And I said it was a semi-contextual argument with the implication that the way you work that out varies with time and circumstance. It varies at different times in the same setting, but then also it varies across settings. So if we think about the traditional family in America in the 1950s, which is more or less captured, I think, at least with respect to governing children, in this picture, that's, of course, very different than the traditional family life we might find in South Africa in the 1980s. And where you go depends upon where you start, and the basic impulse behind democratic justice is a kind of democratized Burkeanism. That is to say it recognizes, along with the anti-Enlightenment theorists, that we never design institutions afresh; rather we redesign institutions that we inherit and reproduce into the future. But rather than just reproduce them into the future in a conservative fashion, the impulse of democratic justice is to democratize them as we reproduce them into the future. And so when we think about restructuring the family, or restructuring the system of education, or restructuring any realm of collective activity, the idea is to take the inherited systems of norms and practices, imbibe the values that are embedded in them, learn about them, but then not be uncritical. Think, rather, as we reproduce them into the future, about how they could be restructured in accordance with the basic impulse of democratic justice, which is to democratize the power dimensions of those relationships while leaving the other dimensions as unsullied as possible. We want to democratize.
Democracy is a subordinate good, but we do not want to interfere with the superordinate good any more than necessary. So that is a summary statement of what we did on Monday, but to put some flesh on it I thought I would walk you through a couple of examples of what this means when you start actually to apply it. And I'm going to talk today about two such examples. One concerns governing children and another concerns governing the workplace, and I'm going to walk you through the kind of reasoning that gets generated if we try to think about the exercise of democratizing these spheres of life within the constraints of democratic justice. Now I thought it would be good to start with governing children because on the face of it, it presents the most difficult challenge for the argument that I sketched on Monday. Namely, if you want to have some notion of collective self-governance, that is that people have a presumptive say when power is exercised over them, how is that going to work with children? And if you want to have some notion of opposition, how is that going to work with children? Because after all we don't want simply mindless opposition. We want loyal opposition. We want informed opposition, but young children are incapable of that. They're certainly capable of opposition, but not exactly the opposition that we're looking for. So if you want to say that democratic justice should apply in all domains where power is exercised this looks like a pretty hard case, and I think it presents unique challenges. For one thing, the hierarchies that exist over children are inevitable, and so we have to think about the opposition side of democratic justice in a way that takes that into account. On the governance side, too, we're dealing with a situation where it's impossible for children to make decisions about their own lives, certainly at the very beginning. Others are going to have to make those decisions for them. 
So on the governance side of the equation John Locke had this, I think, exactly right when he said that the only basis for disenfranchising people is necessity. So if somebody is unable to participate, then they should be deprived of the franchise, but only to the extent that necessity requires. So for Locke, children should be disenfranchised for their "ignorant non-age," and once that expires they should be enfranchised. And he also thought that with respect to adults we shouldn't disenfranchise people unless absolutely necessary. Perhaps we find it necessary to take away people's driving licenses once they turn 85, or at least start subjecting them to annual driving tests, but that doesn't necessarily mean we would take away the vote from them. So the basic Lockean impulse is only to disenfranchise when necessary and no more than is required by circumstances. Now that too presents something of a challenge when you start to think about children, because if you read some books on the history of childhood one of the things you will quickly discover is that over the past hundred years or so there's been a huge lengthening of childhood. We have invented adolescence. Adolescence is not a category that existed a hundred years ago. Children were thought to be miniature adults at a much earlier stage. So we have lengthened childhood, lengthened the time during which people are disenfranchised. After all, the incidents of adulthood begin to accrue around sixteen and seventeen, when people get driving licenses and so on. They can vote now at eighteen, but we still prohibit the imbibing of alcohol until twenty-one. So it's a gradual exodus into adulthood and it's an elongated status. Now there might be justifications for that.
When you move from an economy that's based on physical brawn to an information-based economy that's based on education, it may be necessary to lengthen childhood because it may be necessary for people to acquire different kinds of skills that were not needed in the eighteenth or nineteenth century. So there may be a justification for lengthening childhood, and I'll come back to that, but it needs to be supplied. The presumption is that you only disenfranchise out of necessity. Now there have been movements from time to time to treat children as miniature adults from the earliest possible age. There's a famous British school called Summerhill founded by a man called A.S. Neill, who died recently. And Summerhill was famous for not having any rules. The kids could do whatever they wanted. They could go to class or not go to class. They could go to bed whenever they wanted. They could come and go. There simply were no rules. And Neill used this as an example. It was always held up as a banner by the Children's Rights Movement as an example of what could be done, and as an implicit argument against authority structures over children from a very young age. But I think it was rather artificial, because after all Summerhill was a private school where the well-to-do could send their children. In England it's euphemistically called a public school, which means a private school, and it was a cocooned environment where children could make mistakes without suffering the horrendous consequences of those mistakes. Whereas if you go down to the Hill district of New Haven and you imagine eleven- and twelve-year-old children making the sorts of mistakes that you could make in Summerhill School without consequence, for those kids in the Hill district of New Haven the results would be life-changingly catastrophic, and often are. So the Summerhill story is, I think, in that sense naïve about the larger context of power relations within which education takes place.
And as we'll see even more explicitly when I come to talk about democracy in the workplace you always have to look both at the system of power relations within an institution, but then also the larger power context, the power externalities as I call them, within which the institution operates. And so I think the Lockean impulse to have a gradual transition to adulthood makes sense and it should be bounded by necessity, which creates the presumption that if we're going to lengthen this period of dependency, the justification has to be supplied. So that's on the governance side of the equation. What about on the opposition side of the equation? Because children don't have the capacity to engage in what we're thinking of opposition. That is opposition where you've internalized and understood the values of the system into which you're born and now you are questioning whether they should be applied into the future. There's not going to be much opposition either, of that sort, until children are fairly advanced. So there's going to be a very long period in which neither collective self-governance nor loyal opposition is going to be in any meaningful sense feasible. Children are going to be stuck in hierarchical situations and that is the basic material with which we have to work. How can we think about that? Well, one argument that comes to mind once we're considering inevitably hierarchical situations, is that perhaps hierarchies can check one another. If you have an inevitably hierarchical relationship and you think about more than one hierarchy-- this is, if you like, another version of ambition counteracting ambition. You create more than one authority structure over children with the expectation that they can then to some degree have competing and to some degree complementary competencies and will check one another, and that's the basic approach that I take in thinking about democratic justice as it applies to children. 
We can think of children, first of all, as having two kinds of interests, which I call basic interests and best interests. So basic interests are rather like Rawls's resources. In that sense my argument is a resourcist argument comparable to Rawls's and Amartya Sen's, which we didn't have time to talk about, and Ronald Dworkin's, which I mentioned to you briefly. And if you go back to your notes from our lecture on Rawls, you'll see that I said one of his most important innovations was to take the focus off welfare, therefore different measures of utility, and start, instead, talking about resources. In his case they were primary goods. So I talk about basic resources or basic interests as those things which it's necessary to vindicate in order for the person to survive and thrive in the economy as it's likely to exist for their lifetime and in the political system governed as a democracy. Children have an interest in their basic interests being met and others have an interest in the basic interests of children being met. We all have an interest in the raising of literate competent people so that they can participate in the democratic political order in which we're all going to operate. And then best interests are something quite different. The best interests of children are to thrive as well as possible, to be all they can be, to be happy, fulfilled, successful human beings, loved and capable of love, and all of the other things that we think of that it's important for children to develop. And the argument is that we should think of the state as the ultimate fiduciary of children's basic interests, and parents as the ultimate fiduciary of children's best interests. This raises the question, what is a fiduciary? 
Again, this comes from Locke. When he's discussing children, Locke makes the following argument--well, I'll put it into twentieth-century or twenty-first-century words, but the basic idea is: what do you do if you raise your child, you do your best for them, you pour resources into them, you pour energy and affection into them, you do everything right, and they turn around at the age of eighteen and say to you, "Dad, I think you're a schmuck. I'm out of here. I don't owe you anything." And you start slamming your fist on the table and you say, "After all I've done for you...." Locke wants to say this argument doesn't fly. Why doesn't it fly? "After all I've done for you," what's wrong with that argument as a basis for thinking about what children owe their parents? You could take this from a Lockean point of view or a Rawlsian point of view. Think. Why is it a terrible argument? I've done these things for you, now you owe it to me to go to law school or whatever it is, support me in my old age. Any guesses? Yeah? Student: Well, the child had no choice but to participate in the power structure that it's been in for the past eighteen or so years. Professor Ian Shapiro: Exactly. For Locke obligation is based on consent--social contract theory, consent. So you've got it right. That's Locke's reason, and so he says, if your child at the age of eighteen says, "You're a schmuck. I'm out of here. I don't owe you anything," all you can really do is conclude that you failed as a parent. The child doesn't owe you anything because--as I said, in Rawlsian terms the fact that this child is your child is, from the child's point of view, morally arbitrary. They didn't do anything to become your child. So your power and authority over the child is the power and authority of a fiduciary, but it's in the nature of a fiduciary arrangement that the child doesn't owe the fiduciary anything. 
You elected to have that child and you internalized the risk that that might happen when the child turned eighteen. So Locke says, "You're basically out of luck. What you can do is threaten to disinherit them but that's about it." And indeed, I think that might be the best argument for allowing inherited wealth ever devised--that there's no other way for parents to control their children. But in any event, short of that you're out of luck. So that's the notion of a fiduciary relationship. The charge does not owe anything to the fiduciary, and the fiduciary relationship persists only as long as the charge is incapable of exercising that authority for themselves. So why think of the state as having fiduciary authority over basic interests and the parents having fiduciary authority over best interests? Well, one reason going in is what I've already mentioned: once you think of an inevitably hierarchical situation, to some degree you want to have hierarchies checking one another, ambition counteracting ambition. But it's not just that. It's more than that. If you think about basic interests, these have to do with survival, basic medical care, and so on, whereas best interests have to do with the sorts of things it would be absurd to think the state would be any good at: knowing your child, caring deeply about your child, wanting your child to survive, having what we called earlier interdependent utilities that are connected to your child's welfare and thriving. Those are not things that can come from government officials, almost by definition, because they involve things like care, and affection, and all of the things that make for the superabundant good of rearing a happy child. And indeed, when you think about what it is that we find tragic when a child winds up being raised in an orphanage, it is precisely that this is missing. They're being raised by people who do not care passionately and intimately about them. 
So you can see why it would make obvious and intuitive sense to think of parents as the fiduciary guardians of children's best interests. Why should the state be the fiduciary guardian of the basic interests, apart from the fact that we don't want to have only one hierarchy? Well, there are certain kinds of things parents might choose to do out of convictions about children's best interests that have an impact on their capacity to survive and thrive as members of a democratic polity and as able to function in the economic system as it's likely to exist for their lifetime. In a famous case, Wisconsin versus Yoder, the Amish wanted to keep their kids out of school after age fourteen even though the state of Wisconsin, through its democratic mechanisms, had decided that you really need to go to school until age sixteen in order to learn the skills necessary to participate in the political order and in the economy as it's likely to exist for your lifetime. The Amish didn't dispute that, but what they said was, "We have learned from experience that if we allow our kids to go to school after age fourteen, the odds that they will leave the Amish community go way up." Well, that's not a good argument for allowing them to prevent their kids getting an education. And so I think the Supreme Court made the wrong choice when it agreed with the parents in that case. Or think of a different one, where Christian Scientists are of the view that a child in need of a blood transfusion shouldn't get it because of their religious beliefs. And this is a case, again, where I would argue that eventually, if you can't persuade them, the parents should lose; the state has a fiduciary obligation to the child to allow it to survive and not suffer the consequences of not having a lifesaving blood transfusion. On the other hand, there are other cases we could talk about. 
For instance, parents might object and have objected to certain kinds of books being used in the schools, where the goal is to promote literacy, on the grounds that they object to the moral message within the books. And they go to court and they challenge the use of certain kinds of books that convey moral messages that the parents find objectionable. And in cases like that, the burden should fall to the school district to show that there isn't some other way they could meet their obligation to teach literacy that didn't infringe on the parents' conception of the child's best interests, which includes the child's moral education. So in those kinds of cases the parents should win. There will sometimes be murky cases. There will sometimes be disagreements about whether or not the state is vindicating the basic interests in a way that intrudes as little as possible on the parents' capacity to meet the best interests. And I think you have to create procedural mechanisms in which those disagreements are played out, and much of that chapter that I had you read for today is concerned with what those procedural mechanisms might be. It's also the case that it might change over time. So, for example, we might initially think that things like sex education belong in the area of best interests because this, after all, has to do with the system of morality that a parent wants to communicate to their child, and we have deep pluralism of values. Different parents have different value systems and that's understood, and therefore we don't want one moral code to be forced on all children about matters of sexual morality, and that could be the status quo for a long time. But then along comes AIDS and suddenly sexual morality becomes a public health matter, and so it then may recast how we would think about the division of fiduciary authority between parents and the state over the sexual education of children. 
And because it becomes a public health issue it may be the case that one would insist on certain kinds of sex education going on in the schools, although it would have to be designed to be as unobtrusive as possible with respect to the parents' conception of the child's best interests. And they'll tussle about that, they'll disagree about it, they'll go to court, but that is a healthy tussle from the point of view of the larger conception of democratic justice. Because really what you want to do is have a system in which both hierarchical orders, namely the parental fiduciary order and the public fiduciary order, are to some extent held accountable and checked by the claims of one another. And so this is one institutional model, and it's actually not that different from the one that exists in the U.S. today when you think about it: the state does reserve authority over children's basic interests, and when parents become physically abusive or endanger the life of a child, we will step in and do something. But short of that, for many very good reasons, we leave the parent to decide the best interests of the child immune from interference by the state, and we try to create mechanisms that limit the ways in which the fiduciary interests of the state are vindicated so as to allow parents the freest possible hand. Of course there are huge areas of controversy with this because often the state will delegate its authority over basic interests to parents, and so we'll allow private schools and that sort of thing, and then we have to manage the resultant knock-on effects of that when the school system fails, or it teaches things that are at fundamental odds with the principles of a democratic society, and those sorts of tensions are inevitable once you have a dual-regime system. And you need to create mechanisms, whether it's courts or systems of administrative appeal, or whatever it might be, in which these parties can check and hold one another to account. 
Because the state has so much more power, I argue the burden should often be adjusted accordingly, and it should be easier for parents to challenge what public officials do than the other way around. And so you get into fine nuances of institutional design that are intended to accommodate the basic power disparities. But so that is, if you like, an adapted Lockean view of how one would think about governance over children that is modified by the principles of democratic justice. Let's switch and talk about employment relations, very different world. We're now dealing with competent adults who are presumed to know their own interests. We don't need to have fiduciary arrangements and paternalistic judgments at all, and we can imagine a system in which everybody takes care of their own interests. Now, if you think about the governance of the firm, as I said, firms are hierarchical organizations in which employees have to do what employers tell them. This is generally thought to be efficient. There are debates about whether firms are too hierarchical. If they become too hierarchical, they may become less efficient. And indeed, in the 1960s and '70s there was a movement trying to argue that democratically-run firms would be more efficient than hierarchically ordered firms. Huge literature on that, but the best book on this subject was written by Henry Hansmann, now in the Yale Law School, a book which I commend to you all called The Ownership of Enterprise, which I discuss some in that chapter on democracy in the workplace. And Hansmann pointed out that if democratically-run firms were more efficient, we would have seen them emerge all over the economy. If democratically-run firms were more efficient they would have outcompeted the hierarchical firms. And indeed in the '60s and '70s, part of the reason people made those arguments was they observed that Japanese firms were much less hierarchical than American firms, and also they thought they were much more efficient. 
There were all these studies showing Japanese car firms had three layers of management whereas American car firms had seven layers of management, and Japanese car firms were more efficient. That didn't look so good after the 1990s rolled in and Japanese firms turned out not to be very efficient at all. There were similar arguments about what appeared to be democratic firms in the Mondragón region of Spain that were thought to be very efficient, but subsequent studies showed that actually these firms were not very democratically run at all and were basically controlled by the banks that controlled their capital. And so it was rather naïve to expect democratic firms to be efficient. And indeed, as Henry Hansmann points out, you only really find that democratic firms are efficient in circumstances where everybody can do every job, like law firms, or taxicab companies, or plywood co-ops in the northwestern US, where everybody has an interchangeable role. Then democracy doesn't come at any cost to efficiency. This is a version of the ancient idea of ruling and being ruled in turn. If we can all do everything we can be egalitarian about it. And remember the Buchanan and Tullock observation that I talked to you about earlier: when we have disagreements, creating a lot of procedure will be time consuming precisely because we have disagreements, we have different interests. So Hansmann says, "In large firms or firms where you have old workers and young workers, they'll have very different interests on things like pensions. Young workers will want one thing. Old workers will want something else. And if you start to have a lot of democratic procedure in those kinds of circumstances it's going to come at an efficiency cost because they're going to be sitting in meetings all day arguing about their pension benefits and not working on the assembly line. 
So it's not surprising," Hansmann says, "that in most industries, in fact, you do not get democratic firms, for efficiency reasons." Efficiency is the superordinate good. Firms are there to produce goods and services that make money, and if you don't do it more efficiently than the next person you're going to go out of business. So from the point of view of pure efficiency it seems like you're not going to get a lot of democracy in the firm. That's the cold harsh reality from the perspective of 2010. And the literature advocating democratic firms has this kind of kumbaya quality that is difficult to take seriously in the contemporary world. Firms are going to be hierarchical. Then we have the problem: how should we think about the hierarchies within firms, given the fact that they're sites of greatly unequal distributions of power? Different players have different options. For instance--and this is where I would disagree with Hansmann--if you're a shareholder in a firm and you don't like what the firm is doing, you have a very easy option. You can just sell your shares and buy shares in another firm. No problem there. And so Hansmann says we should think of shareholders as enforcing democratic accountability in firms. I don't think that's a very plausible way to go, because shareholders have very low exit costs. If you don't like the way a firm is run you buy shares in a different firm. They're not going to exercise much democratic control of what goes on within the firm. Think about employees in a firm. Well, it depends a lot on the situation. So just to fix intuitions, consider this. Why do I talk about a surfer's paradise? The Belgian political philosopher Philippe Van Parijs wrote a book called Real Freedom for All, a kind of post-Rawlsian book about justice. Van Parijs, by the way, runs something called the Basic Income Network; I urge you to take a look at its website. It's quite interesting. 
He basically says, "Everybody should be paid the highest sustainable wage regardless of work." This is sort of Sweden on crack or something. Everybody should be paid the highest. Even surfers should be paid. That's Van Parijs' view. In European parlance, it's called the social wage. They should have the highest social wage possible. So imagine that a surfer's paradise is the utopia Van Parijs defends in that book Real Freedom for All. Think, at the other end of a continuum, of what we might call a Dickensian nightmare: a world in which there's no health insurance, no unemployment insurance, no welfare state, no social security, nothing. So obviously every actual economy exists somewhere on this continuum from the Dickensian nightmare to the surfer's paradise. And so what I want to say is, "Well, that affects a lot of what goes on within the firm." Because if we're living in a Dickensian nightmare--go back to the example I mentioned last time. Think of an employer who's in a hierarchical relationship with a secretary and says to the secretary, "Unless you go to bed with me I'm going to make sure you get fired." So now an employer is abusing the position in the hierarchy. Well, if this is a Dickensian nightmare, the secretary's going to be terrified because the costs of losing that job are enormous. They're going to lose their health insurance. There's no unemployment insurance, et cetera, et cetera. You're going to be thrown out into a pretty horrible cauldron. If you're living in a surfer's paradise the secretary can walk away at much lower cost to themselves. And I'm not talking about the question whether the secretary is actually fired, or actually walks away, but simply the knowledge both parties have that in the Dickensian nightmare the exit costs for the secretary are enormous, whereas in the surfer's paradise the exit costs for the secretary are very low. That structures the power relations within the firm. 
So the basic intuition here is when we ask the question, "How much should the state regulate what goes on in firms?" the answer is, "Well, it depends." It depends upon where you are on this continuum, because if you're closer to a surfer's paradise there's less reason for the state to create lots of protections for workers within the firm, which are going to come at an efficiency cost by assumption: appeals processes, defense of union rights, grievance procedures, burden shifting to employers when there are disputes. All of that comes at an efficiency cost. So the notion is that if it's a Dickensian nightmare and the worker's basic interests are threatened, then democratic justice would not say the worker should internalize those costs; rather, the employer should. But as you move towards the surfer's paradise then there's more reason to say that the worker can internalize some of the costs of hierarchy in the firm, because the costs of leaving are comparatively smaller. And so you have, as I said, a semi-contextual argument. What this regime should be like depends on the power externalities within which the power internalities of the firm actually take place. And one of the desirable features of this, I think, is that it leads you to rethink the relationship between capital and labor. Marx thought that's the basic conflict, workers on one side, capital on the other. But now think, say, about something like health insurance. We have a system now where health insurance is provided through employment, which moves you more in the Dickensian nightmare direction, because if you lose your job you lose your health insurance. And employers compete with one another by offering-- you know, Yale says, "We offer better health benefits than working for whatever it is, Harvard or Columbia." So employers compete with one another, but the truth is all of the employers would be better off if they didn't have to compete and everybody had health insurance. 
So there's a kind of collective action problem. What looks like, especially once we get into strikes and bargaining disputes between employees and employers, what looks like a conflict between capital and labor can be re-conceptualized as really a collective action problem among firms. They all compete at the margin over things like benefits, but in fact they would be better off if health benefits were taken off the table entirely and funded through the tax system. So what looks like a conflict between capital and labor is actually a collective action problem among firms. Another advantage of this, I think, is that it does not put the best in conflict with the good, in that to the extent you have a regime like this you give firms the incentive to support the expansion of the social wage. Because what do capitalists want? What do managers want most? They want flexibility at the plant level. They want to be able to turn on a dime, and lay off people, and do something differently, and compete in the twenty-first century in a totally different way than they were competing two years before, because that's what you need. The name of the game is flexibility, adaptability, nimbleness, all that stuff. That's what firms want. That's what they need. And if they know that the closer you get to a Dickensian nightmare the more cumbersome regulation they're going to have, and the more you move toward a surfer's paradise the less you're going to have, then they have an incentive to try and move the society toward the surfer's paradise. So corporations will get behind expansions of health insurance and so on. And there's a certain realism to that, because if you go and study the history of the expansions of the welfare state, despite what you might read in the newspapers, what you will find is it never happens unless business gets behind it. 
The big expansions of the social wage, the big expansions of the welfare state even in countries like Sweden have only occurred in circumstances where business gets behind it and supports it. So you want an industrial management regime that distributes the incentives in such a way that they will do it. So those are two examples of how we can think about democratizing the power dimensions of social institutions while interfering with the superordinate goods as little as possible. And that's the basic impulse of the argument of democratic justice. Now if you take a step back and you think about democracy over the past couple of centuries, you know, I talked to you some about Alexis de Tocqueville. He made two points about democracy. He had a love-hate relationship with it. I told you he was in many ways a critic of it, but he did say two things about it, one that it was inevitable. "The gradual progress of democracy," he said in the preface to the 1848 edition of Democracy in America, "is something fated. It's providential. It can't be stopped." And the other thing he said is that, "The thing about democracy is it has wild instincts. What we need to do is try and domesticate it, not get rid of it. We can't get rid of it. We can't stop it so we must domesticate it." Well, I think both of those claims were partly correct. It is true Tocqueville was correct to think that democracy is an idea with world-historical force. It will always be appealed to by people who experience injustice, and they will always demand democratization if they find their social circumstances unsatisfactory. So I think this notion is partly right. But even though democracy is an idea with world-historical force, I think he was partly wrong to say that the progress of democracy is inevitable. There have always been advances for democracy and then setbacks. We had the French Revolution. We had a disaster. In 1830 we had a democratic revolution sweep Europe. 
By 1832 the monarchies had been restored. In 1848, again, we had democratic revolutions sweep Europe, and by 1851 they had been undone. In 1989 we had democratic revolutions, a huge new wave of democracy, but some of those democracies came under real threat or fell apart, as in Algeria in 1991. We know what happened in Rwanda, although things have turned around again since. Or look at Pakistan in the last three or four years. We see that we shouldn't think of democracy as inevitable, this sort of Francis Fukuyama idea that history is tending in some direction. To the extent you think democracy is valuable, really what you have to do is work to promote it and sustain it. Likewise with Tocqueville's second observation, that democracy has wild instincts and needs to be reined in. We need to, as he put it, "educate democracy" to limit its wild instincts, and make sure that it doesn't destroy other good things, superordinate goods. In his case, of course, freedom was the most important good that he didn't want democracy to compromise. But I think the same argument applies with respect to efficiency, with respect to many of the superordinate goods that people strive after and cherish. So we always have to be wary of the capacity of democracy to undermine other goods. But, and this is where I think Tocqueville overstated the case, that does not mean we should overlook democracy's capacity to undermine bad things in the world. To the extent we can democratize power relations without interfering with superordinate goods, we can have a tremendous effect for good, whether it's the abolition of slavery, whether it's the limiting of exploitation in the workplace, whether it's the protection or the restructuring of the law of marriage to prevent the domination of women. These are all areas in which the basic democratic impulse to democratize social relations has improved them. And so it doesn't mean it'll always be successful. Sometimes it'll fail. 
It requires creative ingenuity, the capacity to try things out and change them when they don't work, but the basic impulse of democratic justice is to take inherited institutions, democratize them as we reproduce them into the future, leaving a better world than we found. Thank you very much.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
11_Marxian_Exploitation_and_Distributive_Justice.txt
Prof: Okay, we're going to pick up where we left off with talking about Marx last time. I was just talking to some of the teaching fellows about how the sections went, and it seemed like there was some confusion or, what's the word, lack of comfort with this concept of exploitation in Marx. Just where does it come from, and how does exploitation occur when people are making voluntary Pareto exchanges? And I think that is the right question to zero in on, and we're going to spend more time on it both today and next time, because it really is important for Marx's argument to work that exploitation is not about cheating people. It's not about getting people to do things involuntarily. The idea behind the concept of exploitation is that equivalent is exchanged for equivalent. Use-values are voluntarily exchanged. So another way of putting it, if you like, is that the transaction between the employer and the worker is a Pareto superior transaction, and Marx is not claiming that it isn't. So then you might say, "Well, what is this idea of exploitation?" And we saw that in some ways it's quite counterintuitive, because when we worked through this, the only way in which the capitalist can increase his productivity is by either increases in what Marx called absolute surplus value, which is basically lengthening the working day, getting people to work harder. There are obviously physical limits to that, and political limits. Workers are going to organize. They're going to get the Ten Hours Bill passed through Parliament, as they indeed did. And so then you get the other approach, what he called increases in relative surplus value, which is technological innovation, so that the capitalist can cover his wage bill in less time. And so the necessary labor time goes from four hours to three, and there's seven hours of surplus labor time in which the worker is producing goods that, when sold, produce value that does not accrue to him, okay? 
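The arithmetic behind this example is easy to make concrete. Here is a minimal sketch, assuming a ten-hour working day and following the lecture's move of necessary labor time falling from four hours to three; the helper function and its name are my own, not the lecturer's:

```python
def rate_of_surplus_value(necessary_hours, working_day):
    """Marx's rate of surplus value (the rate of exploitation, s/v):
    surplus labor time divided by necessary labor time."""
    surplus_hours = working_day - necessary_hours
    return surplus_hours / necessary_hours

# Before the innovation: a 10-hour day with 4 necessary hours (6 surplus).
before = rate_of_surplus_value(4, 10)       # 6/4 = 1.5

# Relative surplus value: innovation cuts necessary labor from 4 to 3 hours,
# so surplus labor rises from 6 to 7 hours within the same 10-hour day.
after = rate_of_surplus_value(3, 10)        # 7/3, about 2.33

# Absolute surplus value: lengthening the day instead, say to 12 hours.
longer_day = rate_of_surplus_value(4, 12)   # 8/4 = 2.0

print(before, after, longer_day)
```

On these numbers the move from four necessary hours to three raises the rate of exploitation from 1.5 to about 2.33, even though nobody is cheated in the exchange, which is the counterintuitive point the lecture flags.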
We saw there was something counterintuitive here, and we're going to come back to that, in that most of you figured you would choose three over two if you had to take one of them, but that's actually a higher rate of exploitation. And I started talking about the assumptions about how people compare themselves. Remember the self-referential versus other referential, and we'll come back to all of that. But the point I want to make first, and to get you just clearly to understand, is that this kind of move, increasing productivity by technological innovation, Marx and modern economists all agree is the thing that makes capitalism dynamically productive. Because it's the pressure to innovate that is created by capitalist competition that leads to technological innovation and more and more capital-intensive production. Because if you think about it, the more you're spending on technology to make your labor more productive the less you're spending on your wage bill, and it's only living human labor-power that creates fresh value according to Marx. And so when we think about this line moving this way we're thinking about production becoming ever more capital intensive, right? More and more spent on technology to make the workers more productive, and we should expect capital intensity to increase over time. When you see Marx use phrases like "the organic composition of capital increases," he's just saying that production becomes steadily more capital intensive. Now you could say, "Well, why is this exploitation?" Marx wants to say--this is, I think, at the end of the day not a very plausible argument for reasons we'll see on Wednesday. He wants to say this is not a moral argument. This is not a normative argument. He's using it as a technical term. And so the fact that people don't feel more exploited in this circumstance than in this is neither here nor there. 
It's true in the class-in-itself sense that I talked about last time, that they, objectively speaking, are more exploited. He has an argument about why they will eventually come to feel more exploited that we'll attend to a little bit later today, but for right now he just wants to say this is an objective argument. Now you could say, "Well, so is the rate of exploitation the rate of profit?" And the answer is no. It's not the rate of profit because the value that's created here, the so-called variable capital, which just covers the wage bill, is separated out from capital that is spent on everything else. So it's true that the capitalist's profit comes out of here, but so do rent, research and development costs, advertising, mail, you name it. So profit is some subset of this but not the totality of it. So the rate of profit is not the same thing as the rate of surplus value. And the rate of profit is also not the same as the rate of exploitation. But what he is assuming in the concept of exploitation is that the worker is producing some value that ultimately accrues to the capitalist. Now I said to you, when we started out talking about Marx, that just like the neoclassical theorists he has a micro theory and a macro theory. You now know what the micro theory is. It's this story here. You might think about it. This diagram is to Marx what the Pareto diagram is to the Pareto system. This is the micro story of exploitation in the same way that the Pareto system is the micro theory of market exchanges. But at the end of the day, of course, there's a macro theory as well. As I said, Marx, like Adam Smith and David Ricardo before him, had an invisible hand theory. And I think the easiest way to get a grip on his macro theory is to talk about the five sources of crisis in capitalist systems that he thought followed from his analytical logic. And I'm just going to spell them all out now. 
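The point that the rate of profit is not the rate of surplus value can be put numerically. In Marx's notation the rate of exploitation is s/v (surplus value over variable capital, the wage bill), while the rate of profit is at most s/(c+v), since the capitalist's outlay also includes constant capital c (machinery, rent, advertising, and so on). A sketch with illustrative numbers of my own, not from the lecture:

```python
def rate_of_exploitation(s, v):
    """Rate of surplus value: surplus value s over variable capital v (wages)."""
    return s / v

def rate_of_profit(s, c, v):
    """Surplus value s over total outlay, constant capital c plus variable
    capital v. Always lower than s/v whenever c > 0."""
    return s / (c + v)

s, v, c = 70, 30, 100   # assumed: surplus value, wage bill, constant capital
print(rate_of_exploitation(s, v))   # 70/30, about 2.33
print(rate_of_profit(s, c, v))      # 70/130, about 0.54
```

The same surplus value thus yields a high rate of exploitation but a much lower rate of profit, and actual profit is only some subset of that surplus, as the lecture notes.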
None of them is without its problems, and we'll come back to the problems later today and mostly on Wednesday, but I think it's better first to see the argument in its totality before we start dismembering it and see what survives and what doesn't survive closer critical scrutiny. So in the first instance one of the possible sources of crisis is the existence of money. Now money for Marx is a commodity like anything else. This is, of course, before the era of paper money and anything like that. So money is gold or silver or something like that, and it's a commodity like any other commodity. Its value is determined by the cost of producing it. So gold is valuable because it takes a lot of work to dig it up, basically, and to refine it. Obviously for a commodity to function as money it's got to have certain properties. It's got to be easily divisible. If we used sheep it'd be kind of hard because if you want to use one leg of a sheep to represent three bottles of wine you'd have to kill the sheep, wouldn't be very convenient. So money has to be easily divisible and not atrophy and things like that. Those are its properties as a use-value, if you like, but as an exchangeable commodity it's like anything else. But once you get a developed market system it depends on money to function, and there's the ever-present possibility of liquidity crises if people start hoarding money. And he thinks that in times of anxiety and so on people will start to hoard money. We saw this, if you remember, during the financial crisis last year where suddenly banks that had trusted one another for decades and even centuries stopped trusting one another and wouldn't lend money to one another in ways that they regularly did because they didn't know whether the other party had the money to lend. And there was a period of several weeks where the entire financial system was in danger of crashing simply from the liquidity crisis that sort of was perched on top of the credit crisis. 
So once the system becomes completely dependent on money as a medium of exchange there's the ever-present possibility that people will hoard money and the system will go into a crisis. Secondly, Marx thinks he's now understood, better than Smith and Ricardo before him, just why it is that there's a declining tendency in the rate of profit in capitalist systems. And the answer is in the logic of the system you've already worked your way through. So if you go back here, as we move down here to make the system more capital intensive, any given move increases profitability for the person who makes it. So if this is the status quo, and it's a cotton factory and the capitalist puts in a spinning jenny and makes back his wage bill more quickly than anybody else, that leads him to be able to undercut his competitor, and so his profits will go up. But all of the competitors will then make exactly the same innovation themselves. And then once you have spinning jennies throughout the cotton industry the claim is that the rate of profit in the cotton industry will be lower than before anybody had put in a spinning jenny, okay? So the idea is, what you do at the margin to improve your profits as a capitalist leads to the declining rate of profit in the industry as a whole, because as we become more and more capital intensive in the production of cotton we have less and less living human labor-power going into the production of cotton, and it's only living human labor-power that produces fresh value. Now you might say, "But doesn't the spinning jenny produce fresh value?" Marx's argument is no. The value of the spinning jenny was determined by the labor to produce the spinning jenny, and its value is transferred to the product, but that's all. It doesn't produce new value. It's only the living human labor-power. And the way you need to think about it is that that is less, and less, and less of the capitalist's investment as a proportion of his total investment. 
So living human labor-power, or variable capital, as Marx calls it, is a diminishing proportion of what the capitalist has to invest in order to remain competitive. And so over time profits are going to diminish in very competitive industries because they'll get more and more productive. And that's what we mean in the modern world when we say there's very little margin in a particular industry. If you want to go into making something like salad dressing, a very saturated market where every gizmo has been tried, you're going to find minuscule margins; very, very hard. And Marx thinks this is a process going on economy-wide, and so that's why, economy-wide, the rate of profit gradually declines in capitalist systems. And as I said, that's something that all of the classical economists thought that they had observed, and so any theory worth its name was going to have to account for it. So that's the second source of crisis, though, because as it becomes harder and harder for capitalists to make a profit many of them are going to go out of business. And that brings me to the third source of crises in capitalist systems, and it's basically that competition eliminates competitors. So the initial market model he starts with is a perfectly competitive one, but what he's saying is, of course, over time people go out of business at the same time as things are becoming more and more and more capital intensive-- another way of putting it is the entry costs become very high. For instance, in 1986 when the Challenger blew up there was a company called Morton Thiokol that made the O-rings for the Challenger on the rocket boosters. It turned out that the design was faulty. Now you might say, "Well, that was probably the end of Morton Thiokol's relationship with NASA. Having made such a terrible error they would go to a competitor." But guess what? Morton Thiokol is still making rocket boosters for NASA. 
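The falling-rate-of-profit tendency described above is this arithmetic iterated. Under Marx's assumptions, only living labor-power creates new value, so if the rate of surplus value s/v is held fixed while constant capital c grows relative to the wage bill v, the rate of profit s/(c + v) must fall. A sketch under those assumptions, with invented numbers:

```python
# Sketch of the declining tendency in the rate of profit: hold the rate of
# surplus value s/v fixed and let constant capital c grow as production
# becomes more capital intensive. All numbers are invented for illustration.

def rate_of_profit(s_over_v, c, v):
    s = s_over_v * v        # surplus value at a fixed rate of exploitation
    return s / (c + v)

s_over_v = 1.0              # 100% rate of exploitation, held constant
v = 100                     # wage bill held constant
for c in [100, 200, 400, 800]:   # spinning jennies accumulate
    print(c, round(rate_of_profit(s_over_v, c, v), 3))
# 100 0.5
# 200 0.333
# 400 0.2
# 800 0.111
```

Each individual capitalist's move down the table is rational (it undercuts the competition), but once everybody makes it, the industry-wide rate of profit is lower than before, which is exactly the dynamic the lecture describes.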
Because the notion that I would say to myself, "Hmm, looks like a good business for me to get into. They're obviously no good at it. I'll go start making rocket boosters for NASA," is laughable because the entry cost is so high. So that's an example, pretty much, of a monopolistic industry. So competition leads to innovation, yes, but it also leads to the elimination of competitors as production becomes more and more capital intensive. And once you don't have competitors you don't have a reason to innovate, and the basic dynamic of capitalism starts to slow down because monopoly capitalism doesn't have the same inbuilt incentives for innovation. Remember, the capitalist is not afraid of the worker in Marx's story. The capitalist is afraid of the capitalist down the street, and once there is no capitalist down the street about to take his business away or her business away then this competitive dynamic starts to atrophy. So that's a third source of crisis as far as he is concerned. A fourth source of crisis that Marx identifies, and here he's pretty much actually following Adam Smith, the way you can think about it most straightforwardly is that the workers collectively, if they pooled all of their wages, couldn't buy what they produce. The workers collectively can't buy what they produce, and that being the case there's going to be an endemic problem of weak demand in capitalist economies. There's not going to be enough demand to satisfy the needs of the system in order for everything that's produced to get sold. Big problem. You could say, "Well, doesn't that just mean wages will rise," and indeed it does put some upward pressure on wages, but as we said, wages are basically going to be held around subsistence by unemployment, and so there's not much scope for that, right? 
Because if wages start to rise you're then going to have the problem that any capitalist who's not in the system, or would be capitalist, or want to be capitalist is going to come and hire unemployed workers to work for him or her and so drive wages down. So this is a problem. It's why Adam Smith thought you got imperialism, and indeed Marx and Lenin after him all thought one of the reasons you get imperialism is to find markets for the goods that are produced. There were other reasons too, cheap resources and so on, but one of the reasons is the search for new markets, more demand, but that's a process which has obvious limits because eventually your empire is going to bump into the French empire, and the Belgian empire, and the German empire, and you're going to fill up the world, and then what do you do? So you can stave it off with imperialism, but the basic underlying problem is that the workers collectively cannot buy what they produce. Now we see this sort of dynamic plays out in some ways during recessions when there's not enough demand in the economy, and of course there are things you can do about it. You can do what we did in The Great Depression. The government can borrow money and basically give it to unemployed people by hiring them to do public works, and then they spend it and stimulate demand, or they can try what we've been trying to do in the last year and a half. They can try and get the banks to give credit to people to buy things and stimulate demand. But it's a basic structural problem, Marx thinks, built into the system by the very fact that the workers collectively cannot purchase everything that they produce. Now you could say, "Well, these are all sort of tendencies, but there are counter tendencies as well, and these problems can be staved off," imperialism being an example of staving off, the one I've just discussed. And Marx is not particularly clear on just which one of these problems is supposed to be decisive. 
Rather, what he thinks is they'll all sort of kick in and start to make the system creak at the joints and not work very well. And as that starts to happen, workers will start to get mad because they will start to discover that the system isn't giving them any particular benefits; the famous line towards the end of The Communist Manifesto that you all read: "Workers will come to realize that they have nothing to lose but their chains." The system that has given them certain benefits will give them fewer and fewer benefits as competition becomes more cutthroat. Their employers are working with narrower and narrower margins. They're unable to buy the luxury goods that they see all around them, and so they'll start to become angry, and that leads us to the discussion we had last time where we see that Marx wants to say that their declining relative share of the total product is eventually going to make them militant. And so the image is of a system becoming less and less dynamic, a smaller and smaller number of monopolies in every industry, capitalists who are going out of business falling down into the working class bringing all their resentments with them, and finally the system starts to reach the point where it just can't function, and that's when he thinks the socialist revolution becomes both possible and necessary. And so you will get some sort of revolutionary moment, and the workers will take over the state, and the small remaining number of monopoly capitalists will be put out of business, the means of production will be nationalized and then socialism will exist. So that's the story that he's telling. Now I want to zero in on a few features of socialism and communism before we start, and then we'll come back and talk about the difficulties with this argument. Socialism for Marx is not an equilibrium any more than capitalism is. Socialism is not a system that is without its own internal contradictions according to Marx. 
And the reason is as he says here, What we have to deal with is a communist society, not as it has developed on its own foundations, but, on the contrary, just as it emerges from a capitalist society; which is thus in every respect, economically, morally, and intellectually, still stamped with the birthmarks of the old society from whose womb it emerges. Accordingly, (now he's talking about after a socialist revolution) the individual producer receives back from society-- after the deductions have been made-- exactly what he gives to it. So another way you might put this is workmanship is no longer hypocritical. Under socialism the worker really gets back what he's put in. "He receives a certificate from society that he's furnished such-and-such an amount of labor (after deducting his labor for the common funds); and with this certificate, he draws from the social stock of means of consumption as much as the same amount of labor cost." So you have what he describes here as the rights that workers get to what they've produced, he says, it's "still in principle-- bourgeois right, although principle and practice are no longer at loggerheads, while the exchange of equivalents in commodity exchange exists only on the average and not in the individual case." Don't worry about that last phrase, but what he's saying here is that principle and practice are no longer at loggerheads. What he means is the worker is being recompensed for his work, right? Whereas, under capitalism the worker appears to be being recompensed for his work, but in fact some of what he produces is accruing to the capitalist. So that's the difference. Principle and practice are at loggerheads under capitalism. It professes the idea that we're entitled to what we make, whereas under socialism we, in fact, do get what we make, and in that sense, socialism is an advance on capitalism. But he says, "In spite of this advance, the equal right is still constantly stigmatized as a bourgeois limitation. 
The right of the producers is proportional to the labor they supply; the equality consists in the fact that measurement is made with an equal standard, labor." Now what does all that mean? It's rather contorted language, but just look at a little more of the contorted language, and then I'll explain in words of one syllable what it means. This is from The Critique of the Gotha Programme, and what he wants to say is even if you reward people equally, even if you reward people equally for their work, there's going to be inequality in the society. So he says, look, One man is superior to another physically, or mentally, and supplies more labor in the same time (works harder), or can labor for a longer time; and labor, to serve as a measure, must be defined by its duration or intensity, otherwise it ceases to be a standard of measurement. This equal right is an unequal right for unequal labor. So if some of us are stronger than others and we get paid according to how much work we do some of us are going to get more than others, right? "An equal right to unequal labor." It recognizes no class differences, because everyone is only a worker like everyone else (once we get to socialism); but it tacitly recognizes unequal individual endowment, and thus productive capacity, as a natural privilege. It is, therefore, a right of inequality, in its content, like every right. Right, by its very nature, can consist only in the application of an equal standard; but unequal individuals (and they would not be different individuals if they were not unequal) are measurable only by an equal standard insofar as they are brought under an equal point of view, are taken from one definite side only-- for instance, in the present case, are regarded only as workers and nothing more is seen in them, everything else being ignored. Further, one worker is married, another is not; one has more children than another, and so on and so forth. 
Thus, with an equal performance of labor, and hence an equal share in the social consumption fund, one will in fact receive more than another, one will be richer than another, and so on. So if you think about a system in which there is no longer a capitalist, but people are rewarded on the basis of work, there's still going to be inequality because some people have more effective capacities to work than others, and because some people have more demands on them than others. If you have more children to feed than somebody else earning the same amount, it's going to leave you relatively worse off. So socialism is not a condition of equality because there is this right based upon labor. And he says, These defects are inevitable in the first phase of a communist society (which is socialism) as it is when it has just emerged after prolonged birth pangs from capitalist society. Right can never be higher than the economic structure of society and its cultural development conditioned thereby. In a higher phase of communist society, after the enslaving subordination of the individual to the division of labor, and therewith also, the antithesis between mental and physical labor, has vanished; after labor has become not only a means of life but life's prime want; after the productive forces have increased with the all-around development of the individual, and all the springs of co-operative wealth flow more abundantly-- only then can the narrow horizon of bourgeois right be crossed in its entirety and society inscribe on its banners: From each according to his ability, to each according to his needs! "From each according to his ability, to each according to his needs;" that is the definition of the distributive principle under communism, whereas under socialism it's "from each according to his ability, to each according to his work." 
So socialism is not hypocritical with respect to the workmanship ideal, because unlike capitalism the workers get rewarded for their work, but it's still an unequal society because people are differently abled, and people have different demands to meet. And so finally we have to get to the abolition of the very idea of rights in order to get to the communist idea of "from each according to his ability, to each according to his needs." So that's the Marxian story. I'm sorry I put such long quotations up there, but I think you need them to get a flavor of how he was actually thinking about it. There are these interacting crises, all of them making one another worse: declining profits, liquidity crises, more and more militant workers, and so on and so forth. Finally you get an increase in consciousness, this socialist takeover. It's still an unequal system, but it has the capacity for superabundance, for co-operative wealth "to flow more abundantly," and so we can abolish the idea of right completely and just distribute on the basis of needs. Now I want to start digging into this in a critical way to show you all of the things that are wrong with it, but this is in the spirit of how we treated utilitarianism. That is to say, we're going to see all of the things that are wrong with his argument, in the same way that we did with Bentham and Mill, in order then to see if there's any surviving intuition. And I want to start with this concept of needs. One of the reasons Marx and his followers were so focused on needs was that they were convinced that in order to keep itself going, capitalism manufactures pseudo-wants all the time. We already saw there's this problem with insufficient demand. You've got to get people to want things. You've got to get people to believe if you don't have the latest whatever-it-is, you're a schmuck. My son looks at my Blackberry Tour and he says, "How can you use a phone like that? Nobody uses those anymore. 
You've got to have a Google phone, and if you don't have a Google phone you're just a complete loser." That's his view of the matter. And because there's this problem of weak demand there's this built in emphasis to constantly manufacture demand for things that nobody needs. But the impulse to create all of these phony needs will go away once you don't need capitalism to innovate. So once you don't have capitalism, you don't need people to constantly think that they have to have the latest gizmo. You don't need to try and get people to think of the 27 kinds of dishwashing liquid in Stop and Shop. This one's better than that one. All these manufactured artificial needs are an artifact of the problem that we have to keep capitalism going, and once that goes away we can think about people having finite needs. And so if society reaches a certain level of abundance then those needs can be met. Who thinks that's a terrible argument? Okay, what's bad about it? Take the--over here, yeah? Student: It just doesn't seem like a good model of human nature and it puts a limit on how much you can want. Prof: It what? Student: It puts a limit on how much you can want. It seems like everyone has to want the same amount of things. Prof: I think those are good points. What else is wrong with it? I mean, surely it's true that some of the things--there is a difference between wants and needs, and a lot of our wants are very frivolous, right? There are a lot of things that we want. A neoclassical system doesn't differentiate because we don't allow interpersonal judgments of utility, right? So Trump thinks that he needs the next hundred million dollars and you might say, "He might want it, but he doesn't need it." But what's wrong with that? I just want you to elaborate on what you were saying, or somebody else elaborate on what you were saying. Yeah? Student: It's hard to know how the government will decide what people need and what they just want. 
Prof: You're troubled by the idea of the government deciding what people will need and what they just want. I think that's fair enough, but I think Marx in his most utopian moments-- after all, one of the differences between socialism and communism is, there is no government under communism. There's a withering away of the state. So his idea, I think, is there's just going to be all this super--all these goodies everywhere and people will just take what they need. Student: Where does the abundance come from, also? That's another problem. Prof: It's been made possible by capitalism. The productivity of capitalism will have... Student: And then it gets used up and then there's no more capitalism to create more. Prof: Okay, so you're saying we might go to superabundance, but then fall back from it. Yeah, I think those are good criticisms. So why will there be continuing abundance? I think that's a fair criticism, but I think there's a more fundamental problem with it, and that is in distinguishing wants from needs you could say, "Look,"-- even before you get to those problems which you both raised serious questions for it, but even before you get to those questions, somebody might say, "Look. We can at least define needs by what people require in order to survive." Surely if we want to distinguish wants from needs we make a distinction between things that if you don't get them you're going to die versus things that if you don't get them you might not be that happy, but you're not going to die. What about that? Who thinks that that would solve Marx's problem? Anybody? Why wouldn't it? Who thinks it wouldn't? You just said we're going to reach a level of productivity where we can produce what people need in order to survive, and then we could get rid of the regime of rights. I think Marx makes a very valid point when he says, "Any regime of rights is a regime of inequality." Yeah? 
Student: Well, I think that's kind of keeping with the same thing they had during capitalism. Like if it's still just subsistence level, if that's what you need then that's the same thing the workers had under capitalism. It's not really making communism a better society for the people. Prof: Okay, so you're saying a world in which people only meet their needs may not be a better world. Student: Yeah. Prof: Okay, I think that might be a fair criticism. He wants to say the alternative is that we're all like these rats on wheels running faster and faster to get less and less benefit. So unless we can accept finitude, that we live for a fixed time, and come to grips with the fact that we are finite creatures, we're never going to be happy. So I think that's the debate that he would want to have with you, but I think that even within Marx's own terms of reference there's a different problem, and that is this. Suppose you said, "All right, we'll grant you, as a matter of the philosophy of it, we'll grant you, Karl Marx, the proposition that needs are defined by what is necessary for people's survival, and that they should somehow be more important than Trump's frivolous wants that might make life better; but still, all in all, making my life better by letting me have a nice car is less important than preventing somebody from starving when it really gets down to it. And so we could make this distinction between needs and wants in that sense. But here's the problem. Even if we're only trying to meet people's needs defined in that narrow sense of keeping people alive, there's still endemic scarcity. This notion of superabundance doesn't make any sense. Just think about it for a minute. Money that we spend on dialysis machines is money that we don't spend on heart transplants or AIDS research. So no matter how much wealth there is, there are still distributive choices to make in this society. 
There's still going to be the problem that we're going to say, "Well, if we choose to protect needs A, B and C, anyone who needs dialysis should get free dialysis." Whether we make it explicit or implicit is beside the point. Somebody who needs something else to keep them alive is not going to get it. We're not going to do research on cancer because we're spending that next marginal dollar doing research on the causes of AIDS. So once you see that point, then the notion of superabundance becomes incoherent. There is no such thing as superabundance. And without superabundance you can't get a world beyond entitlements explicit or implicit. You're never going to transcend the world of scarcity. Once you see that meeting needs, even defined in this minimal way of what's necessary to keep people alive, involves tradeoffs, what you're really saying is that scarcity is endemic to the human condition. And this is the deepest analytical flaw in Marx's normative theory. Because once you say that scarcity is endemic to the human condition you can't get away from distributive conflict. You've got to find some way whether it's a system of rights, whether it's the market, whether it's the government making decisions about who needs what, somebody or some mechanism is going to make those decisions, right? So you reach the proposition that scarcity is endemic to the human condition and you have to reason about what makes sense taking that for granted. In a way it's ironic that Marx was so blind to this because Marx was the person who said, "Human nature's not static. It evolves through history. We develop and change," and so you would think he of all people would see that the fact that in 1886, the natural life expectancy was, whatever it was, forty-six years, why that shouldn't be taken as a given, or the biblical three score years and ten be taken as a given for that matter. 
But you'll never reach a condition in which scarcity can be transcended, and that's the ultimate hope behind his utopianism. So it's not that I disagree with any of the points you folks made. I think they're all correct. There are other reasons that you would question it, but the reason this particular argument, I think, cuts at the roots of his theory is that it's the most favorable interpretation of his account of needs, right? There's no more favorable interpretation than limiting them to survival, and even on that account we can't get to a world of superabundance. We will always live in a world of scarcity, and that means that whatever you have to say about distribution has to take that for granted. This is, I think, as powerful a point as the point we made when we came to the end of our discussion of Mill, to the effect that you can't get the politics out of the definition of harm, right? There's no neutral way to define harm. Another way of thinking about this is it's a way of trying to transcend politics, right? One of Engels' claims (not Marx, but never mind, Engels was stating a Marxian view) was that the great thing about socialism and communism is that politics gets displaced by administration. That you don't have to worry about distributive conflict in this story because there's superabundance, right? And in Mill's account you don't have to worry about the definition of harm in his case because there will be a neutral scientific account. Neither of them works. So we have a world in which there have to be political choices about distributive questions. Some say leave it to the market, but that is itself a political choice. We'll have more to say about that. So the beginning of the end of Marxism is, I think, the recognition that scarcity is endemic to the human condition. What we will do on Wednesday is explore and unpack some other difficulties with his argument and then see whether and what the remaining insights might be. See you then.
The Moral Foundations of Politics with Ian Shapiro
Lecture 18: The Political-not-Metaphysical Legacy
Prof: Okay, so our task today is to finish up teaching about Rawls, and then I'm going to take a step back and look at where we've gotten to so far in the course, because we're really entering a transition moment from the Enlightenment to the anti-Enlightenment, which is what we'll begin with on Wednesday. So just to recap briefly, we had been talking about Rawls's two principles of justice which were really three principles of justice: one for the distribution of liberties, the most extensive system of liberty compatible with a like liberty for all. One was the principle of equality of opportunity, which is 2b in his lexical ranking, which for some reason known only to John Rawls comes before 2a, and the third one was 2a, this so-called difference principle, or what used to be called in welfare economics the maximin principle, which says maximize the minimum share. And I had explained to you why Rawls thinks the standpoint of justice is the standpoint of the most adversely affected, which is not a bleeding heart idea but rather this universalizable idea, this notion that if you can affirm a principle even from the standpoint of those people who are most adversely affected by it, you'll affirm it from every other conceivable standpoint as well. And so this translated into these L-shaped indifference curves, so that if we start with a distribution like that, anywhere in this area would be preferred, because all Rawls is interested in is maximizing that distance. That is the size of the share at the bottom. He's indifferent to who has that share. So that, say, a move from X to F over here would be an improvement for Rawls even though it's clearly a massive loss to A and a huge gain to B. He's not interested in that. The point is that the distance from the axis to F is greater here than the distance from the axis to X is there and that's the only relevant consideration. 
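The L-shaped indifference curves amount to a simple decision rule: rank distributions only by the size of the worst-off share. A minimal sketch, with the X and F distributions invented to mirror the example above (A suffers a massive loss, B gains hugely, but the minimum share rises):

```python
# Rawls's difference principle (maximin) as a decision rule: a move is an
# improvement whenever it raises the smallest share, regardless of what
# happens to anyone else. Distributions [A's share, B's share] are invented
# to illustrate the X-to-F move discussed in the lecture.

def maximin_improvement(before, after):
    """True if the worst-off share is larger after the move."""
    return min(after) > min(before)

X = [90, 10]   # A has 90, B has 10
F = [12, 60]   # massive loss to A, huge gain to B

print(maximin_improvement(X, F))   # True: 12 > 10 is the only relevant consideration
print(sum(X), sum(F))              # 100 72 -- total wealth doesn't matter to the rule
```

Note how the rule is indifferent both to who holds the bottom share and to the size of the total pie, which is why, as the lecture goes on to say, it is compatible with very egalitarian and very inegalitarian outcomes alike.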
Now sometimes Rawls is called an egalitarian, and as I began to point out to you at the end of Wednesday's lecture, that's really misleading. His principle is rather very underdetermined. That is to say if we compare it with the Pareto principle it completely contains within it the Pareto principle. So that if somebody came along and said, "Well, the best way to benefit the people at the bottom is to have markets-- trickle down let's say that the pie will grow the most and the people at the bottom will benefit the most from a pure market system-- Rawls would say, "Fine," because everything that is Pareto superior is also Rawls preferred. On the other hand, if somebody came and made the case that heavy state intervention to redistribute would in fact work to the greatest benefit of the least advantaged then he would agree with that as well. And so you can conjure up egalitarian results that are compatible with the Rawlsian scheme, or anti-egalitarian results. A further consideration comes in here when we stop just talking about the worst-off individual because you can start to think about, well, what happens if you get a small marginal increment for the person at the bottom paid for by massive cuts on the middle class and perhaps big benefits to the very wealthy. And I mentioned the example of the Reagan tax cuts in the 1980s which had that structure. Rawls would have no objection to that either even though from an egalitarian perspective that would look like a regressive redistribution from the status quo in 1980. So it's a very underdetermined principle. It's not necessarily egalitarian or necessarily anti-egalitarian. All it says is, "Arrange things to the greatest benefit of the person at the bottom." And as I think I mentioned to you, Rawls took a lot of criticism in the 1970s and 1980s from people who would plow all the way through to page 300 and whatever it is in his book to learn that Rawls says he's agnostic between capitalism and socialism. 
And people say, "Well, to read 300 pages of a book about justice and discover the author is agnostic between the two main political economic systems of the twentieth century isn't exactly satisfying." But Rawls' answer, as I said, is which type of political economic system actually operates to the benefit of the person at the bottom is not a question for political theorists. That's a question maybe for a political economist, maybe for trial and error through policy innovation. It's going to have to get hammered out in the real world of practical political economy. And so it's not a critique of the Rawlsian standard. It's underdetermined with respect to the choice of actual political economic systems. And I think that Rawls is on relatively firm ground there. We wouldn't want to think that which political economic system is the most efficient from the standpoint of benefiting people at the bottom is really a philosophical question when clearly it is not. So that's the Rawlsian story. I gave you the big picture at the beginning: his general conception of distributive justice, and then these more specific principles that get added in the course of trying to figure out how the general conception of justice can actually be applied. Now I want to take a step back. I'm going to come back to Rawls a little bit later, but I want to take a step back first and think about where we've come from. And then we'll see how Rawls leads us to think about where we've gotten to. We started out this course by talking about Enlightenment political theory. The Enlightenment being a philosophical movement that really starts in the seventeenth century but gathers steam in the eighteenth century, and I said that from the point of view of political arrangements, there were really two core values of the Enlightenment. 
One was a commitment to the idea of individual freedom, as realized through a doctrine of individual rights, as the most important value in politics, and the second was a commitment to reason and science as the basis of politics rather than the things that had prevailed hitherto, such as natural law, or tradition, or natural rights, or religious argument. Rather, the move--which we saw so dramatically with Bentham but has been present in one way or another with everybody we've looked at--the move is to say, "No, we're not going to appeal to tradition. We're not going to appeal to religion. We're not going to appeal to natural law, natural rights. We're going to appeal to the idea of science, science understood through reason." And so those are the twin ideas which one way or another have shaped every single theorist that we've looked at. So starting with the commitment to science, remember those early Enlightenment theorists were very different from modern Enlightenment thinkers in that they identified science with certainty. Remember the Cartesian idea that you have certainty about the contents of your own mind, or Locke's point that we alone have that kind of privileged access into the contents of our own soul. Remember his famous line to the effect that true and lasting conviction requires inward persuasion of the mind, and that can't be forced on anybody by the magistrate or anybody else; this internal certainty is so important. And for Hobbes we saw that knowledge has its basis in willing things. So we got this rather curious result that he said, "The laws of geometry have the force of laws because they're the product of wills. We make the triangle, if you like, and the laws of politics were like the laws of geometry because we make the commonwealth in the same way that we make the triangle," and therefore in the early Enlightenment there was this enormous emphasis on certainty as the hallmark of science.
But then we saw, as we moved into the mature Enlightenment with people like John Stuart Mill, that actually they had a very different view of science. That the hallmark of science was fallibilism, that everything we think we know is subject to doubt. There isn't anything in our reasoning about the real world that meets the Cartesian criterion of being impossible to doubt. What we think we know is always revisable in the light of more evidence and better scientific investigation. And Mill's defense of the marketplace of ideas, and of competition, and argument was precisely to encourage that. And so the long chapter on freedom of thought and action in On Liberty is really about--remember how we get from freedom to utility via a system that allows the truth to come out, namely a system in which ideas have to confront contrarian ideas in the marketplace of public speech, and that science is really a process which makes it possible, or more possible than any other process, for us to approximate the truth. But that's something very different from having this notion of certainty. So the commitment to science becomes the commitment to fallibilism, and the commitment to a system which has room in it for experimental searching after the truth as a perpetual feature of human social association. Now when we look at Rawls and Nozick, and the social contract theorists, we find at least in some respects a sort of throwback to the early Enlightenment, at least in the Nozick and the Rawls that we've talked about to date. That is to say, they are looking for a unique answer to the question, "What principles would people agree upon if they were designing society afresh?" And this hypothetical social contract, so called, depends on the idea that there's a unique answer to that question, right? They concede that there never was a social contract, but they say, "Suppose we were designing the rules from scratch? What rules would we design?"
even though no society was ever created in that way. And the thought was, "Well, if there's an answer to that question, if there's a definitive answer to that question, then we have a standard, a yardstick for measuring actual institutional arrangements." That is to say, societies that come closer to that standard can be judged better than societies that are further from it. And as societies evolve over time, if they evolve toward it they'll be improving, and if they evolved away from it they would be getting worse. So this was the Kantian, or what I called neo-Kantian aspiration to come up with a standard which any rational person must, on reflection, affirm. And if you can say what that is, then you have your yardstick by reference to which you can look at actual political and social arrangements. And it might be worth pausing just for a second to notice that Immanuel Kant himself, who we didn't read in this course so you'll have to just take it on faith from me-- Kant himself was deeply skeptical that that could be done because he thought that social and political arrangements are inevitably dependent on empirical considerations and you're not going to get the universal laws about those things. So Kant himself would have been skeptical of the Rawlsian project. He would have said, "You're not going to be able to get empirical propositions about the organization of society that are going to rise to this level of a categorical imperative." And I think that one implication of our discussion of Rawls is that Kant was, in fact, right. For example, Rawls' story about protecting the position of the person at the bottom assumes an enormous degree of risk aversion. Now he had an answer for why. He said, "Well, there's no necessary relationship between the level of economic development and the fortunes of the person at the bottom," so you had that grave risk assumption. 
But we saw that even that becomes problematic if you think about very marginal improvements in the condition of the person at the bottom coming at enormous costs for the person a little bit higher up. It's not at all clear that the rational thing to do would always be, no matter what the cost, to preserve the condition of the person at the bottom. The important article about this, published by John Harsanyi in 1975, argues that if you really think that people have a rational view of risk, it would make more sense to choose utilitarianism behind the veil of ignorance than to choose the Rawlsian commitment to maximizing the position of the person at the bottom. Now it's not obvious that Harsanyi is right, but nor is it obvious that Rawls is right. And once you make that admission, then you no longer have a unique answer. You say, "Well, we plug one set of psychological assumptions about risk into the model and we get Rawls. We plug a different set of assumptions about risk in and we get Harsanyi," and so it's all being driven by the assumptions about human psychology that we plug into the model. And, of course, once you recognize that, then you don't have a unique answer. Another way of putting it is that the young Rawls was rather naïve about what is uncontroversial in economics, psychology and sociology. That is, Rawls made a distinction between the laws of psychology and economics, as he put it, which we do have knowledge of behind the veil of ignorance and which he was treating as uncontroversial, and then the specific knowledge that we have about our particular life plans, goals and so on, which he kept hidden from us. But it turns out that there are very few uncontroversial assumptions about human psychology or economics, and so you can't get unique answers.
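The disagreement between Rawls's maximin rule and Harsanyi's expected-utility rule can be stated in a few lines. The payoffs here are invented, and utility is crudely equated with income; each number is the payoff of one equally likely position you might occupy behind the veil of ignorance:

```python
# Two hypothetical societies, three equally likely positions each.
society_a = [30, 31, 32]   # safe: high floor, little upside
society_b = [25, 60, 95]   # risky: lower floor, much higher average

def maximin(s):
    """Rawls: judge a society by its worst-off position."""
    return min(s)

def expected(s):
    """Harsanyi: judge a society by its average (expected) payoff."""
    return sum(s) / len(s)

# The two rules rank the societies in opposite ways.
assert maximin(society_a) > maximin(society_b)     # Rawls picks A
assert expected(society_b) > expected(society_a)   # Harsanyi picks B
```

Which choice is "rational" depends entirely on the attitude to risk plugged into the model, which is the point of the passage: different psychological assumptions in, different principles out.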
And indeed, if we only restrict ourselves to economics, one of the most interesting developments in economics of the past decade is precisely the turn away from standard economistic assumptions into the field of psychology to see how preferences are formed, why people have the risk profiles that they do and so on. And much of modern behavioral economics regards as subjects for study, rather than axiomatic assumptions, the sorts of things Rawls wanted to work with. So there isn't a unique answer. Now as it turns out, as Rawls became older he realized that this neo-Kantian venture was built upon a hill of sand, and that he wasn't going to be able to make Kant's ethics do the work in social contract theory that traditionally had been done by natural law, for the reason that I just gave you. You're not actually going to be able to get unique results out of it. And so the mature Rawls made a different kind of move, which in some ways is an even bigger retreat from the early Enlightenment than was Mill's retreat to fallibilism, and that is the move that comes under the heading of "political, not metaphysical." And that's the subtitle of the article I had you read for today. And so here's the intuition. It's counterintuitive until you think it through, but then I think it's actually quite a powerful intuition. And I'm going to explain it to you actually not by reference to Rawls, but by reference to another person called Cass Sunstein, who's a lawyer at the University of Chicago. You might hear a lot about him because he's on the long short list of Obama picks for the Supreme Court. So Cass Sunstein might be in the news. Stay tuned. But Sunstein had a different slogan; not "political, not metaphysical," but rather what he called a theory of incompletely theorized agreement. This is not an elegant term, but let me give you the intuition here.
When we think about political disagreement, we often think, well, people can agree on very general things and the devil is in the details. People can agree that freedom is good, but they can't agree on what is actually required for freedom when you get down to the brass tacks of arguing about policies. Does freedom require that people have universal healthcare? Some people say yes. Some people say no, right? So one view of political disagreement is that we can agree at a very high altitude, but then when you start to get into the dirty particulars of everyday life we can't agree. Sunstein, and Rawls in his political-not-metaphysical mode, have almost the opposite intuition to that. And so here the sort of example would be, well, think about a faculty in a university trying to decide whether or not a junior person should get tenure. They might be able to agree that the person should get tenure without being able to agree in a million years about why the person should get tenure. Or think of Congress passing a piece of legislation--think of Congress passing the healthcare bill that just went, warts and all, through the House. You get the votes, but if those people had to agree upon why they were voting for it they couldn't agree in a million years. They all have different reasons for why they're voting for it, or why they're voting against it, for that matter, right? So the notion of incompletely theorized agreement is, "You know what? We don't care." We don't care--or Rawls's idea of "political, not metaphysical." The question is, what political arrangements would people with very different values, commitments, worldviews, metaphysical systems, what would they agree on? What would be, to use another one of Rawls's terms, the overlapping consensus? Think of a big Venn diagram where you've got a lot of circles mostly not overlapping, but they all overlap in one area. That's the overlapping consensus, okay?
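The Venn-diagram picture of the overlapping consensus is literally a set intersection. Here is a toy sketch; the worldviews and the arrangements each one can accept are invented for illustration:

```python
# Each worldview's set of political arrangements it can accept,
# for whatever reasons are internal to that worldview.
secularist = {"disestablished church", "free speech", "state schools"}
fundamentalist = {"disestablished church", "free speech", "religious schools"}
libertarian = {"disestablished church", "free speech", "no state schools"}

# The overlapping consensus is just the intersection of the circles.
consensus = secularist & fundamentalist & libertarian
print(sorted(consensus))  # ['disestablished church', 'free speech']
```

Note what the intersection does not record: why each worldview accepts each arrangement. That information is simply discarded, which is the "we don't care" at the heart of incompletely theorized agreement.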
And we don't care about the parts that don't overlap. So it's much less rationalistically ambitious because now if you think back to the first principle we don't have to say that the fundamentalist would agree that she or he has more freedom in a disestablished church regime than the non-fundamentalist would have in the fundamentalist regime. All we have to say is that the fundamentalist accepts this. We don't know why. We don't care why. So that's the notion of incompletely theorized agreement, or political, not metaphysical. We're just going to look for what is the overlapping consensus for people with very different worldviews, metaphysical systems, beliefs, et cetera. Okay, and so another way you can think about this is, instead of saying, "First I'm going to convince you of my metaphysics, and my epistemology, and my theory of science, and then when I've persuaded you about all of those things I'm going to show you how my political theory follows." On the Sunstein or mature Rawls view that's a mug's game. You're never going to do it because people are never going to agree about all of those things. And more important for politics we don't need them to agree about all of those things. All we need to do is find the overlapping consensus that they will affirm. So they might affirm a series of political arrangements, institutions, for very different reasons from one another, and it doesn't matter. We don't need it to be any more robust than that. And so that's where the mature Rawls winds up. And as we'll see in the final lectures of this course there is a certain democratic element to this, but it's under-theorized in Rawls. And the analogy I'll just mention and I'll come back to it later in the course is--the analogy is the secret ballot. We don't require people to give reasons for the way in which they vote. They can have reasons for choosing the same candidate as we choose that we would regard as completely idiotic. We don't care, right? 
So the political-not-metaphysical move, or the incompletely theorized agreement move, is analogous to the idea of the secret ballot in that we become much less demanding. People don't have to have good reasons for voting the way they do. Their reasons are their private business, okay? And so the political-not-metaphysical move builds on that kind of intuition, and obviously it's a huge retreat from the original Enlightenment motivation to get principles that must follow scientifically for any clear-thinking person. So the mature Rawls is a kind of extreme retreat, you might say. Even though the young Rawls is an Enlightenment thinker with all the zeal of a Jeremy Bentham, the mature Rawls really gives up on the Enlightenment project. And, of course, you then get into the question--he still thinks his three principles would be affirmed. He thinks these three principles are part of this overlapping consensus, but he has no way of knowing that that, in fact, is true. It's just an assertion that it's true, and it's not necessarily the case. So what I'm going to suggest to you in later lectures is that Rawls actually retreats too far from the Enlightenment project, and that there's a way of thinking about the mature Enlightenment that's consistent with a democratic political outlook and doesn't give up on the Enlightenment project as completely as the political-not-metaphysical view does, but that's for the future. Let's first focus on the other element of the Enlightenment. I said the one was this commitment to science, and we've seen how that played out from the seventeenth century to the late twentieth century: the affirmation of science, but the gradual retreat from certainty, that marked the march from the early Enlightenment thinkers through Mill to Rawls, Sunstein and others. But now let's focus on the normative idea of individual rights.
Remember we said that the summum bonum, the most important value of the Enlightenment, is this idea of individual freedom recognized or institutionalized by a doctrine of individual rights. The rights of the individual are somehow sacrosanct. And we saw that in the early Enlightenment this had a theological basis. Remember I said to you that Locke was tormented by the theological controversy between the two sides, some of whom said God is omnipotent; but if you said God is omnipotent, that seemed to undermine the idea that the laws of nature could be timeless, because if God is omnipotent he could decide to change them tomorrow. So either God is omnipotent or the laws of nature are timeless, but not both, and he wrestled with that. If you go and read his essays on the law of nature, written around 1660, you see him really tormenting himself. But at the end of his life he comes down firmly on what we call the will-based theory, the idea that something can't be a law unless it's the product of a will. And so we'll go with the omnipotence and let the universalism fall by the wayside. And so that was the idea that God owns his creation because he made it, and God knows his creation because he made it, and then the move Locke makes is that God gave us the capacity to make things for ourselves. We become, as he called it, miniature gods. So long as we act within the constraints of the law of nature, we can behave in our realm in a way that's an analogy of the way God behaves in his realm. We could create things over which we have maker's knowledge, just as God has maker's knowledge of, and rights of proprietorship over, his creation--this idea that we're miniature gods. And then we saw what happened to that idea. We called it the workmanship model. We saw what happened to it over the course of the next several centuries. And particularly we saw that beginning with Marx what you get is an attempt to secularize the workmanship idea.
That is to say, to cut it loose from its theological moorings, but still affirm the basic structure of the idea that making confers ownership. And we saw that Marx's version of that ran into trouble because he wanted to say only the worker makes things, when in fact we saw the capitalist also contributes to the value of things. And then we looked at the feminist critique of Marx, which was, "Well yes, and the stay-at-home spouse contributes to the value of what the worker makes," and indeed even perhaps the Sunday school teacher who drummed the work ethic into the worker contributes to the value of what the worker makes, and so on. So that if you have this idea that making confers ownership you're going to get a complicated web of overlapping and indecipherable entitlements, not any clean argument of the sort that Marx's theory of exploitation aspired to be. And so we saw that if you look at the Marxist tradition they eventually give up on all of that. People like John Roemer, and Jon Elster, and Jerry Cohen give up on the idea of workmanship and turn instead to arguments about power. But most people wouldn't find that entirely satisfying, because most people do want, at some level, to link what we get to what we do. Even if the notion of workmanship is problematic, most people don't want to give it up entirely. And I think you see this very dramatically with Rawls, because Rawls takes the idea of workmanship apart in a hardheaded way that nobody before him ever did. And this is the debate I was referring to last Wednesday about nature and nurture and moral arbitrariness. Just to remind you, there's this huge debate. It's gone on for 150 years. Are the differences between us the result of nature or are they the result of nurture? And Rawls makes the point, "You know what? It doesn't matter. It doesn't make any difference, because in either case these differences are morally arbitrary," right? Remember this? Yeah.
So whether I'm a good athlete because of my genes or whether I'm a good athlete because of the way I was raised is immaterial. I did nothing to have certain genes or to be raised in a certain way, and I didn't even make choices that led to those results. So any benefits I get are morally arbitrary. That was the notion that Rawls brought to bear. But then the question is, "Well, why should anyone be entitled to what they make?" And Rawls is really pushed in the direction of a kind of socialization-of-capacities strategy, as I call it in that piece that I had you read. But it's very unsatisfying, because if I spend five years writing a book and you come along and say, "Well, you're not entitled to that book. You don't have any special claim on the capacities," I'm going to get really mad. "I worked really hard on that. I want it. It's mine. Who are you to take it away?" So even if you can't give a good philosophical defense of this workmanship ideal, people are deeply unwilling to let go of it. Indeed, Rawls himself is deeply unwilling to let go of it. So Rawls--and we walked through this on Wednesday, but I'll just remind you of it again--makes a distinction between the capacities that we have and the use we choose to make of those capacities. The notion was: if we both have the same IQ, but one of us chooses to work and the other chooses to sit on the couch watching ESPN, the one who works should get more because they chose to work, and the person who sits on the couch should get less because they chose not to work. But that doesn't really work for Rawls, because once you've made the move into this land of moral arbitrariness, the differences in weakness of the will are themselves distributed in morally arbitrary ways. So perhaps the person who works all day had the work ethic drummed into them a mile a minute by some Sunday school teacher or very involved parent, whereas the one who winds up sitting on the couch all day didn't.
Their father was off stoned all day or something when they were supposed to be being taught the work ethic. Well, if the capacities themselves are morally arbitrary, then so are the differences in the capacity to use those capacities, and you get, obviously, an infinite regress. So Rawls tries to build a kind of moat around this implication of his argument, but it doesn't work. And if we had time to talk about other theorists in this tradition, you'd find the same thing. I'll just mention the example of Ronald Dworkin, who, like Rawls, has a resourcist view, and who, also like Rawls, sees that the differences between us are morally arbitrary, which they are. And he says, "Well, we should make a distinction between the material resources people have (which are sort of like Rawlsian primary goods) and their physical and mental powers. And we should treat the differences in material resources as morally arbitrary, but not the differences in physical and mental powers." But, again, you have to come back and say, "Why not?" He says, "Well, we couldn't redistribute them," but actually that's not true. For instance, think about blind people. You could have a system which said, "Well, if some people are blind we have to compensate them for their blindness because they have a morally arbitrary disadvantage," right? Or, indeed, if you really wanted to be brutal about it--and nothing in the logic of what Dworkin's saying rules it out--we should just blind all the sighted people, right? Maybe the technology would allow forcible eye transplants. It's not a path people are going to want to go down, but it's hard to see why not once you take this idea of moral arbitrariness seriously. So two points to make about that. We go back to the very beginning. This was never a problem for Locke, right?
And we should remember that because, after all, all of this comes from Locke; this workmanship model comes from Locke. Why is it not a problem for Locke? Because for Locke, if we have differences in capacities, it must have been God's plan, right? Remember Locke's story is that human beings are God's creation. Human beings do not create other human beings. He's very clear about this in his discussion of parental rights. We don't own our children in the way that we own our property, because we don't create children. God creates children and he implants in human beings the urge to reproduce, but we don't fashion the child. We can't create a child in an architectural sense, and most importantly, of course, for Locke, we don't put the soul in the child. So God does all of that. So children are God's property and parents are fiduciaries. So we don't own our children. So if it turns out that some of us are smarter than others, or some of us are more hardworking than others, and some of us are better athletes than others, there's no moral imperative for us to have some account of why those differences exist, because they're not products of human action. They're products of divine choice. So in the Lockean story this isn't a problem, but the minute you secularize the workmanship ideal this issue arises. Why is it that some people should get more than others just because of morally arbitrary characteristics? And the ways in which people who have gone down this path try not to get to the end of it aren't very plausible, right? I mentioned Rawls. I mentioned Dworkin. We could have looked at some others I talk about in that piece, Jerry Cohen being one of them, but it doesn't work. So you're left with the fact that if you embrace the socialization-of-capacities strategy you're going to get to a place that very few of us want to go. Now it's actually even worse than that.
It's even more problematic than that, because everything I've said to you so far presumes that in the absence of a justification for inequality we should presume equality. After all, I said, "Why should somebody get more just because they work harder if the capacity to work is itself morally arbitrary," right? That presumes that, other things being equal, we should all get the same. But why assume that? We could just say, "Well, it's not a divine plan, so some people are lucky and some people are unlucky." There it is. "Losses must lie where they fall," as a famous American judge once said. And gains should lie where they fall. And so you could get Nietzsche. We'll hear a little bit more about Nietzsche when we read Alasdair MacIntyre next week. You could get the view that the strong win and it's just the way it is, and asking moral questions about it is simply irrelevant. Now Rawls thinks he has an answer to that, which is what I started with at the very beginning of the Rawls lectures: remember when we had the discussion of what's the fair way to cut a cake. And one of you said, "Well, the fair way to cut a cake is to give the knife to the person who takes the last slice, and then he or she will divide it equally," the presumption being that the person wants to maximize the slice that they get, the residual slice, and the way you do that is to divide it equally. But I made the point that that's not really an argument for equality, right? That's not a moral argument for equality. It's not a moral argument for equality because it assumes that what we want to get is equality, and then you create the mechanism to generate that result. So suppose, for example, we added more information and we said, "Well, we know that of the six people here waiting for a slice of cake, one has three cakes at home, and one has nothing, and one is a diabetic."
As soon as you introduce information of that sort, then it's not obvious that where you want to wind up is equality. And so saying to the person with the knife, "You get the last slice," becomes problematic, right? So all of this is only to make the point that the cake-cutting example does not establish the moral desirability of equality; on the contrary, it assumes you've decided equality is where you want to end up, and then you create a mechanism that generates it. And that's the structure of what Rawls does in his theory of justice. He assumes that his principles are where we want to end up, and then he structures the choice situation to generate them. But it means that if you favor equality, or if you favor efficiency, or if you favor some other basis for distribution, you have to have an argument for it, some argument other than just that it gets generated in this way. And to the extent Rawls has any argument at all, it's this kind of prudence, this "there but for fortune go I," the grave risks. We better take care of the person at the bottom because it might turn out to be me. But as we saw, that assumes a view of risk that some people find irrationally conservative, and so we don't really have a clear, clean-cut answer. And so we don't really have a very satisfying evolution of the workmanship ideal from Locke down to the present. In Locke's formulation it's a theological argument with this very nice coherence--it all fits together--but the moment you try to secularize it you're left with this problem: it leads in directions people are not going to want to go in, on the one hand; on the other hand, they're not going to want to get rid of it entirely either. And so the ways in which people try to hedge in the parade of horribles don't entirely work, and we're left with this kind of nagging feeling that there's got to be some way to salvage this workmanship idea, but we haven't managed to do it.
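The cake-cutting mechanism from a moment ago can be sketched in a few lines. The candidate divisions below are invented; the point is that the cutter, left with the smallest slice, maximizes his own payoff by dividing equally:

```python
# The cutter proposes a division; the others pick first, taking the
# largest pieces, so the cutter is left with the smallest slice.
def cutter_payoff(division):
    return min(division)

# Some candidate ways to cut a cake of size 1 among three people.
candidates = [
    (1/3, 1/3, 1/3),   # equal division
    (0.5, 0.3, 0.2),   # greedy attempt: cutter ends up with 0.2
    (0.4, 0.4, 0.2),
]

# A self-interested cutter picks the division maximizing the residual.
best = max(candidates, key=cutter_payoff)
assert best == (1/3, 1/3, 1/3)
```

Notice that the mechanism yields equality only because the cutter's payoff was defined as the smallest slice; nothing in it argues that equality is morally desirable. Add the diabetic or the person with three cakes at home and an equal split is no longer obviously the right target, which is exactly the point being made above.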
And so, just as the evolution of the commitment to science has taken us to a rather uncomfortable endpoint when we get to the world of "political, not metaphysical," so the evolution of the workmanship idea has taken us to a rather uncomfortable endpoint once we get to Rawls's move and his attempt to limit the radical implications of it that he doesn't like. And so we will pick up that story when we come to talk about the democratic tradition in the last few lectures of this course. But before we get there we're going to consider seriously the idea that maybe the whole Enlightenment project was a mistake. Maybe we should reject the whole Enlightenment venture. This was, of course, the view of Edmund Burke, the Irish political thinker that we'll talk about on Wednesday, and it's a long tradition from Burke down through the present. We're going to look at Alasdair MacIntyre as a contemporary defender of this view, but this is the idea that we shouldn't be surprised that the Enlightenment project turns out to be untenable, because it was a profound and indeed politically dangerous mistake. We'll start with that on Wednesday.
The Moral Foundations of Politics with Ian Shapiro
Lecture 4: Origins of Classical Utilitarianism
Prof: So today we're going to start talking about classical utilitarianism, and we're going to use as our point of departure Jeremy Bentham, who lived between 1748 and 1832 and came up with the canonical statement of the doctrine of utilitarianism. It's a doctrine which is still very much alive and kicking in the contemporary West despite all of its problems, and we'll have things to say about why that is. But I wanted to make a couple of prefatory remarks first about Bentham himself. There are some thinkers in the Western tradition, I guess in any tradition, who have a particular characteristic that Bentham certainly has; of the folks we're going to read, Karl Marx had it and Robert Nozick had it. And the thing I'm thinking of here is that they are the kind of person who takes one idea to the most extreme possible formulation. They ask themselves a question, "How would the world be if this idea that I have were the only important idea?", and they take it to its logical extreme, to an excessive kind of formulation. And they will go places with their idea that nobody else will go, and so that makes them a little bit crazy. They're monomaniacal, obsessively consumed with their idea. In the case of Bentham it's the idea of utility, which we're going to unpack a little bit in a moment. But what's always interesting about people like this is that they play out an idea to its logical extreme, and that exhibits both its strengths and its limitations, just because they're willing to go where others will not go, think the unthinkable, think politically incorrect things for their time, in pursuit of really pushing this idea to the absolute hilt. And so Bentham is the kind of thinker who, I suspect, at the end of the day nobody will be fully convinced by, but he's very useful. He's a very useful diagnostician of what it is about utilitarianism that's going to be appealing to you and where eventually you're going to want to put some limits on it, just because he goes beyond the limits.
And so you can see what happens if you push it all the way to the hilt. Secondly, I want to say that Bentham is important as a fountainhead not only of utilitarianism, but also of modern conceptions of value more generally considered. You'll see that there were rumblings of the kinds of things he had to say about value in the seventeenth century. Hobbes, for example, who I mentioned last time, criticized Aristotle for not seeing that what is good for some people may not be good for other people, and Bentham builds on that idea. You'll see Bentham will start to link the good to what it is that people desire. There were also rumblings of Bentham's methods, particularly his aspirations to found politics on scientific principles, in the seventeenth century. We already saw last time the Hobbesian and Lockean creationist theories of science, but they were really transitional figures. They also gave theological justifications for their arguments, as I explained at some length in the context of Locke. I didn't have time to do it with Hobbes, but many of you will know that if you read the second two-thirds of Hobbes' Leviathan, it's almost all about interpretation of the scriptures, showing that his scientifically derived principles are also consistent with the Bible. Bentham sheds all of this. Bentham is not interested in appeals to tradition. He's not interested in appeals to religion. He's not interested in appeals to natural law. He dismisses the natural law tradition as dangerous nonsense, "nonsense on stilts." He's only interested in a scientific set of principles for organizing politics.
And one of the nice things about Bentham, at least from your point of view, is-- and we'll see that utilitarianism values efficiency in many ways-- that he reduces his whole doctrine to a single paragraph, and he puts that paragraph right at the front of his Introduction to the Principles of Morals and Legislation. So here you have the Cliffs Notes formulation of Bentham's argument. He says, "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do." So this is going to be a doctrine both about what ought to be the case and about describing what we actually shall do, right? "On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne." (That's the throne of pain and pleasure.) "They govern us in all we do, in all we say, and in all we think: every effort we can make to throw off our subjection" (that's our subjection to pleasure-seeking and pain-avoiding) "will serve but to demonstrate and confirm it" (to confirm that subjection). "In words a man may pretend to abjure their empire" (that's the empire of pain and pleasure) "but in reality he will remain subject to it all the while. The principle of utility recognizes this subjection, and assumes it for the foundation of that system, the object of which is to rear the fabric of felicity by the hands of reason and of law. Systems which attempt to question it deal in sounds instead of sense, in caprice instead of reason, in darkness instead of light." That, in a nutshell, is Bentham's theory; a very bold, unequivocal statement.
He's saying if you want to understand human beings in a causal explanatory sense all you have to know about them is that they're going to seek pleasure and avoid pain. And if you want to think about what ought to happen in the design of institutions they should be designed around that fact, to accommodate that fact. And he's going to develop a system of laws, a system of government that takes into account and is built upon this assumption about human nature, as he would have called it; human psychology, as we would call it today. Now, I'm going to make five points about Bentham's system to give you some sense of the full dimensions of it, before we start dissecting it and subjecting it to critical scrutiny. I want to make sure that we understand exactly what his system is. And I want to first of all notice that it is what I'm going to call a comprehensive and deterministic account. I call it a comprehensive and deterministic account in that it's an account of all human behavior. He wants to say everything you do is ultimately determined by pleasure-seeking and pain-avoiding. How plausible--who thinks that's plausible? Hands up. Plausible? Implausible? Okay, give us an example, somebody, of something that is not pleasure-seeking or pain-avoiding, anybody, something that is not the result of pleasure-seeking or pain-avoiding. Well, you put your hands up. You must have had some thoughts. Yeah, okay here. Yeah, you take it. Student: Running into a fire to rescue people. Prof: Pardon? Student: Running into a fire to rescue people. Prof: Running into a fire to rescue people. Okay, you run into a fire to rescue people. What do you think Bentham would say about that example? Yeah, over here, sir. Student: The pleasure is actually saving the people, so there is, like, this benefit that you get from it, the pleasure. Prof: The pleasure you get from having saved the people must outweigh the pain of the fire or you wouldn't do it. Any other example? Nobody's got an example? 
Nobody can think of an example? No? Yes? Yes, sir? Student: Well, there may be some. For example, saving one's child may be purely instinctual rather than driven by pain or pleasure. Prof: So say sacrificing your life to save your child, let's say, to put it in an extreme case. Student: Yes. Prof: What would Bentham say about that? I mean, this seems like a genuinely altruistic action. Somebody lays down their life for their own child. How can that be pleasure-seeking and pain-avoiding? What would Bentham say? Yeah? Student: [inaudible] Prof: Wait, we need you... Student: I mean, clearly the pain of, like, having lost a child, like, outweighs whatever pleasures. Prof: Yeah, I think that is what he would say. Think of the counterfactual. How could I live with myself for the rest of my life if I didn't do it? The pain would be too great. And Bentham considers cases like this. Apparently altruistic acts seem ultimately always reducible to the pleasure-pain calculus. One example he considers is people acting from religious motivations, and he says, "Ha! Just read the Bible. Look at the descriptions of heaven and hell. Isn't that made-to-order pleasure-seeking and pain-avoidance?" Hell is described as, you know, the fires of hell, perpetual pain. So the people who constructed religious doctrines clearly had an understanding of human nature, or they wouldn't have described hell and heaven in the way that they did. So the first thing he wants to say is that this is a completely comprehensive explanation of human behavior. Can anybody think of any example that couldn't be re-described as fitting this pleasure-pain calculus? Yeah? Student: I think that if life were all about pain and pleasure we would be willing to replace our life with one with only pleasure, right? But we wouldn't. We value our life by itself. There is some indescribable quality to it.
I mean, life is the sum of all experiences rather than just pain and pleasure. Prof: So you think that there is more complexity to human motivation, that it's just not expressible as or reducible to pain and pleasure. I think that's a very sophisticated and common critique that's been made of Bentham. Indeed, the obituary of him that was written by Coleridge, I think it was, makes exactly this point: there is a sophistication to human motivation that isn't captured in this idea. I think the truth is Bentham would have acknowledged some of that, but he would have said, "At the end of the day it's not important, because the pleasure-pain calculus overrides when the chips are down. If we're going to think about what it is that's going to motivate people, it's pleasure-seeking and pain-avoidance." Okay, a second thing that you should notice about this doctrine is that it's what I'm going to call a naturalistic doctrine. In some ways it's astounding that, writing almost half a century before Darwin-- Darwin was born in 1809 and lived until 1882-- Bentham grounds his principle in the imperatives for human survival. He thinks that the pleasure-pain principle has a natural biological basis. Although there are religious, moral, and political sources and sanctions of pain and pleasure, these are all secondary to the physical sources for Bentham. He refers at one point to the physical as "the groundwork" of the political, moral, and religious: it is included in each of them. At another point he says we are bound by the principle of utility as "the natural constitution of the human frame," often unconsciously and often when our conscious explanations for our actions are inconsistent with the principle of utility. I'll come back to that point.
If we didn't abide by the principle of utility, he says in his little essay The Psychology of Economic Man, "the human species could not continue in existence," and a few months, not to say weeks or days, would be all that would be needed for its annihilation. In other words, the principle of utility expresses our objective interests as living creatures. A third point that I'm going to make about Bentham's doctrine is that it's what I will call egoistic, but not subjectivist. Now, that's a lot of babble terminology, but let me explain what it means. The reason I'm using those two words together is that they don't normally go together. That is to say, egoistic views are usually subjectivist, so I'm pointing out that this one is not. And by egoistic I mean just the standard economics assumption of self-interest. People are self-interested seekers after pleasure and self-interested avoiders of pain, in exactly the way you learn about them in an Economics 101 textbook. And we'll have occasion to examine that self-interested premise in some depth later. But it's not a subjectivist doctrine, in that Bentham wants to say this is true regardless of what we ourselves say about our preferences. It's not dependent upon your acknowledging its truth for its being true, okay? So you might think you're motivated by altruism, or love of your child, or your religious faith. Bentham says, "You're just muddled and deluded. You don't understand. Your subjective understanding is not in accord with the science of the matter." At one point he says, "It is with the anatomy of the human mind as it is with the anatomy and physiology of the human body. The rare case is not of a man's being unconversant, but of his being conversant with it." So just as when you have a pain in your side and you don't know if it's your liver, or your spleen, or your lung-- the rare case is that you get it right-- he says it's exactly the same with your motivation.
The fact that you don't understand, or wouldn't agree with, or don't acknowledge what's motivating you-- so much the worse for you. You just have an inaccurate or incomplete understanding of your motivation. You're just wrong, okay? So in that sense it's an objectivist account. Whatever people think about it, whatever people say about it, it is objectively the case that they behave self-interestedly in the pursuit of pleasure and the avoidance of pain. Fourth, Bentham's is a radically consequentialist doctrine. Anyone know what that might mean? Anyone want to tell us what you think I might mean by calling it a consequentialist doctrine? Yeah? Yes, ma'am? Student: That in being motivated by pleasure and pain we're concerned with the consequences of our actions. If we know something is going to be painful we'll avoid it, and if it's going to be pleasurable we'll move towards it. Prof: Yeah, but as I said at the beginning of the lecture, Bentham takes everything to the extreme. So we are concerned with the consequences of the action and nothing else. It's an extreme consequentialist doctrine. He's not interested in our intentions, right? The road to hell is paved with good intentions, for Bentham. It doesn't matter what people intend; it matters what happens, right? It's a radically consequentialist doctrine. We will see that there's an alternative tradition of thinking about ethics and politics that is deeply rooted in human intentions when we come to read Robert Nozick and John Rawls and people who draw on Immanuel Kant's ethics, and-- to give you all of the jargon up front-- that is what will be called deontological, sometimes contrasted with teleological. Consequentialist is a kind of teleological doctrine. What does teleological mean? Anybody? Yeah, at the back. Teleological system? Student: Well, given that telos in Greek is the end...
Prof: The end, the purpose, the consequence, exactly right. So consequentialist doctrines are teleological doctrines. They're all about the consequences, the purpose, the ends, the goals, the results, whereas deontological systems, the antithesis of that, which we will talk about later, focus on intentions, on processes, on procedures, on how you do things, not on where you get to, okay? So Bentham is a radical consequentialist: you judge a doctrine, a possible policy, an action, anything you're thinking of doing or not doing, simply by virtue of what effect it is likely to have and nothing else. Nothing else matters. Finally, Bentham thinks everything he's doing is quantifiable. I gave you just a sliver to read from his Introduction to the Principles of Morals and Legislation just so that you could get a sense of how this guy's mind actually worked. He really thought it was the case that he could develop a kind of science of utilitarianism where he would figure out exactly how many utils-- we might call them Standard International Utils, "SIUs"-- would attach to the pleasure or pain of any policy or action, and that eventually you could figure out exactly what all of the optimal policies were for the organization of society. He thought utility really had four dimensions. Intensity: how intense is it? Duration: how long does it last? Certainty or uncertainty: that is, the probability that the result will occur. And what he called propinquity or remoteness, which modern economists would capture by saying we discount pleasure into the future. If you say, "I'll give you a dollar today, or I'll give you a dollar tomorrow," you'll get more utility from the dollar that you get today, okay? So he thought that these were all quantifiable dimensions of utility-- he's a little unsure about intensity, but he's sure that everything else can be quantified. And he set about quantifying.
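Bentham's four dimensions can be sketched as a tiny calculation. This is a minimal illustrative sketch, not anything Bentham wrote down: the function name, the multiplicative scoring, and the 0.95 discount rate are all my assumptions, chosen only to show how intensity, duration, certainty, and propinquity could combine into a single number.

```python
# A minimal sketch of a "felicific calculus" along the four dimensions
# named in the lecture: intensity, duration, certainty, and propinquity
# (remoteness). The discount rate and the multiplicative form are
# illustrative assumptions, not Bentham's own formula.

def util_value(intensity, duration, probability, delay, discount=0.95):
    """Expected, time-discounted utility of one pleasure or pain.

    intensity   -- utils per unit time (negative for a pain)
    duration    -- how long the experience lasts
    probability -- certainty that it occurs (0..1)
    delay       -- periods until it occurs (propinquity/remoteness)
    """
    return intensity * duration * probability * (discount ** delay)

# A dollar today versus a dollar tomorrow: same intensity, duration,
# and certainty, but the later one is discounted, so it is worth less.
today = util_value(intensity=1.0, duration=1.0, probability=1.0, delay=0)
tomorrow = util_value(intensity=1.0, duration=1.0, probability=1.0, delay=1)
assert today > tomorrow
```

The point of the sketch is only that once each dimension has a number, any pleasure or pain reduces to a single quantity that can be compared, added, and traded off, which is exactly the move the lecture describes.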
He set about trying to figure out a system of legislation not only for his own society, by the way; he started writing constitutions for other countries. He ran off to Poland and various places and said, "Look, here's my utilitarian constitution for your country," and he was very disappointed when people didn't rush off and implement it right away. So he truly believed that you could come up with a scientifically demonstrable system of organizing society based on the quantifiable character of utilitarianism. One further feature of this quantifiable character of utilitarianism is that he thought we could make comparisons across people. We could do the math across people. We could add up how much utility one person gets from a possible action, and how much utility or disutility another person gets, and redistribute in order to do what he thought we should do, which was to maximize the greatest happiness of the greatest number. He's a complete consequentialist, so we would do whatever we have to do to maximize the greatest happiness of the greatest number. So, for example, I happen to know that Denise, who's sitting over there, has a great capacity for utility. She's easily pleased. If you give her a book she'll be just delighted. But Anthony over there is a kind of grumpy guy. If you give him a book he'd say, "Well, why didn't you give me two books? One measly book." So, if I have a choice between giving this book to Denise or giving the book to Anthony, I'm going to give the book to Denise, because she's going to get more utility than Anthony's going to get from having this book. We don't really care who has the utility from a social perspective. We want to maximize the greatest happiness of the greatest number, okay? But then what we might discover is that Leonid over there has an even greater capacity for utility. He is just a utility monster.
He's got such a capacity for happiness that any little thing that most of us would think is neither here nor there is really going to make him happy. Well, then we should give everything to him, right? So it's a doctrine that's completely uninterested in the distributive side of utilitarianism, except in an instrumental way, and we'll come back to that next Monday. All you want to do is maximize the greatest happiness of the greatest number in society, the total amount of happiness. Now, here's a further feature of Bentham's doctrine, and I think it follows from the consequentialism, that you should at least notice, because I think it bears on our thinking about, for instance, the Eichmann problem. This is actually taken from the book by Robert Nozick, whom you're going to read later in the semester, in his critique of utilitarianism. He says, let's consider the following thought experiment. Suppose your brain were connected by electrodes to a computer, and the computer was programmed to make you have whatever experiences give you pleasure and not to have any experiences that give you pain. So you would, in fact, be unconscious-- floating in a vat, unconscious, I think, in Nozick's example-- but you would believe you were doing whatever it is that gives you the greatest pleasure. And the question Nozick asked is, "Would you want to be connected to the machine?" Who would want to be connected to the machine? Okay, we only have one, two, three, four, five, six, seven, eight, nine, ten, fifteen. I see about fifteen candidates for Nozick's pleasure machine. Who would not want to be connected to this machine? Okay, we have probably two-thirds of you. Who's not sure? Okay, some are not sure. Those who wouldn't want to be connected, why not? I mean, this is great, isn't it? You don't have to work anymore. You don't have to do assignments. You don't have to show up to class.
You just, you know, for the rest of your life, maybe, are programmed to have the experiences that give you the most pleasure in life. What could be better than that? Why don't you want to do it? Yeah, over here? Student: Well, I think the point of life is to have, like, a complexity of experiences, and without experiencing pain at some point pleasure wouldn't be as sweet. Prof: Okay, the point of life is to--hold on to the mic. I just want to follow this a little. The point of life is to have some contrast effects. Richard Nixon said, "Only if you've been in the deepest valley can you appreciate the joy of being on the highest mountain," as he was being run out of the White House in 1974. Well, you could say, okay, in that case we'll program the machine accordingly, so you'll have certain painful experiences in order to maximize the net of pleasure and pain. Every, I don't know, every fifth minute you'll have some unpleasant experience just so that you don't forget how pleasant the pleasant experience is. We can do that. Student: Well, then you wouldn't be, like, having free will in experiencing the various experiences. Prof: Okay, so that's different. It's not the contrast, not the banality of pleasure, if you like; it's the lack of free will or autonomy. But couldn't we program it to make you think you were acting freely even though you weren't? What about that? Some people say that's true of us all, that this idea we have free will is a lot of bunk. We're all really basically just acting out impulses and instincts, but we believe we have free will. So you could be made to believe that you're making choices even if, in fact, you aren't. Student: Well, wouldn't the, like, free will be as much a component of, like, the natural physical nature of man... Prof: Well now you're... Student: As well as--I mean, I'm adding more to Bentham, but...
Prof: Okay, so that would be a different theory than Bentham's theory, but you can see where this is going, right? I think there are some people in the room, if we had time to pursue this conversation, who no matter what you did to the programming in the experience machine wouldn't like it, and they wouldn't like it for two principal reasons, I think. One has just been articulated, which is that somehow this seems like an abdication of your own autonomy. And when we think back to the Eichmann problem, one of the things that troubled people was his abdication of his autonomy. He's giving up his free will to say, "Yes or no. I think this is right or wrong. I'm going to do it on the basis of my own autonomous judgment." The second thing I think people would worry about is who's operating the machine. Who's operating the machine? How do you know that once they've got you floating in that vat, what you wanted to have done will in fact happen? And so there's a basic problem of agency and accountability that makes people nervous. But let's just put those things to one side for the moment and focus on the rest of the exposition of Bentham's doctrine. We're going to come back to all of these issues, I promise you. I just want to get everything out on the table. What he says about the role of government is: "A measure of government (which is but a particular kind of action, performed by a particular person or persons) may be said to be conformable to or dictated by the principle of utility when in like manner the tendency which it has to augment the happiness of the community is greater than any which it has to diminish it." So again, the bumper sticker version of that for Bentham is: maximize the greatest happiness of the greatest number.
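Bentham's criterion for a measure of government, just quoted, reduces to a simple decision rule, and it can be sketched in a few lines. The function name and the numbers below are hypothetical; the only thing the sketch takes from Bentham is the comparison itself: adopt a measure when its tendency to augment the community's happiness exceeds its tendency to diminish it.

```python
# Sketch of Bentham's test for a "measure of government": it is
# conformable to the principle of utility when the happiness it adds
# to the community exceeds the happiness it takes away. The name and
# the example numbers are illustrative assumptions.

def conformable_to_utility(gains, losses):
    """True iff total utility gained exceeds total utility lost."""
    return sum(gains) > sum(losses)

# A measure giving two people 3 utils each at a cost of 5 utils to a
# third person passes the test; reverse the numbers and it fails.
assert conformable_to_utility(gains=[3, 3], losses=[5])
assert not conformable_to_utility(gains=[5], losses=[3, 3])
```

Notice that the rule looks only at the sums: it is entirely indifferent to who gains and who loses, which is exactly the feature the diagram that follows makes vivid.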
And for those of you who like thinking diagrammatically-- and as I noted in my opening lecture, not everybody does-- if we imagine a two-person society: A has this much utility, that's the status quo, right? A has this much utility. B has that much utility, and let's say there's some outer limit of possible utility, which we'll call the possibility frontier. Bentham would say if you draw that line there, anything that puts us in this, whatever it is, cloudy zone here would be a net increase in the total amount of utility in the society; pretty straightforward claim, right? So we went from there to there. Both would have more utility. But if we went from there, say, to there, A's utility would have gone up and B's would have gone down, but we don't care, right, because the total amount of utility in this society has gone up. What we wouldn't want to do is come anywhere into this area, because then the total amount of utility in this society would have decreased. Okay, so that's basically the story. Now, you might say, "Well, why do you need government at all if this is the story?" Everybody-- whatever they think, whatever they say, whatever they understand-- everybody is a mindless pleasure-seeker and pain-avoider, or perhaps a mindful pleasure-seeker and pain-avoider, but they have no control over that. They're going to just do what they have to do. Why create government with the principle that it should maximize utility in this society? It seems like an odd thing to do. Why would you do that? Anyone? Yeah? Student: So just looking at that last graph, right? If each person tried to maximize their utility then they'd both want to be on the opposite corners of each other, so then you would get chaos when you extrapolate that to a larger group. You need something to kind of manage everybody's pleasure, I guess. Prof: So people wouldn't voluntarily do things that maximize one another's-- maximize the total social utility, right?
If taking something from A and giving it to B would increase B's utility more than it would diminish A's utility, well, A's not going to go for that voluntarily. B might go and take it, but he may or may not be strong enough to take it. We don't know. So that's a very shrewd observation in response to that diagram, and it actually gets to more sophisticated questions about redistribution and utilitarianism that I'm going to take up on Monday. But before we get to those questions, there's a more fundamental level at which Bentham thinks utilitarianism creates the need for government, and that is that there's a disconnect between what's individually optimal and what's socially optimal even before we get to the redistributive questions. We might call it the market failure theory of government. Other eighteenth-century thinkers had taken the view-- this is Adam Smith's famous invisible hand-- that everybody acting selfishly leads to a collectively optimal result. Bentham, we'll see, thinks that's true a lot of the time, but not always. There are certain circumstances in which people are likely not to act in a way that produces a common result. "The great enemies of public peace are the selfish and dissocial passions-- necessary as they are...Society is held together only by the sacrifices that men can be induced to make of the gratifications they demand: to obtain these sacrifices is the great difficulty, the great task of government." And he's thinking really of-- and this may be the first formulation of it that we find-- what we today call free-riding, freeloading. He's thinking about the provision of something like national defense, what economists call a public good. You can't be excluded from the benefits of it, right, and it must be jointly supplied.
So he says: "If, for example, the commencement or continuing of a war being the question upon the carpet, if, upon his calculation, a hundred a-year during the continuance of the war, or for ever, will be the amount of the contribution which according to his calculation he will have to pay" (you have to pay a hundred dollars a year in taxes to finance this war), "if his expected profit by the war will be equal to 0, and no particular gust or passions intervene to drive him from the pursuit of what appears to be his lasting interest upon the whole-- he will be against the war, and what influence it may happen to him to possess will be exerted on the other side." Now, why would his benefit be zero? This is rather convoluted prose, but what Bentham's saying is, "If the war is going to be fought anyway, I get no marginal benefit from supporting it. I might as well oppose it, or I might as well refuse to pay taxes in support of it," right? And that is the nature of public goods: people can free ride on their provision, because, as an economist would say, the two features of a public good are that it must be jointly supplied-- everybody has to contribute to it-- and you can't exclude anybody from its benefits. Like clean air: if we create clean air for some people, we're going to create clean air for all people, okay? So people are going to have to be coerced in the provision of public goods. People are going to have to be coerced to pay for the war. So that's one example. Another one that comes up is the so-called tragedy of the commons problem. Suppose you have some common land-- and we'll come back to talking about this in connection with Locke's social contract theory. God gave the world to mankind in common, on Locke's story, so long as "as much and as good" is left available to others in common. So if you have common land, here's the problem. You're thinking about grazing your sheep on the common.
If I put my sheep onto that common land it doesn't do any lasting damage, but if everybody grazes their sheep on the land and none of it is allowed to lie fallow, then it destroys the common, okay? So there are too many sheep for everybody to graze their sheep on the land, but any individual person doesn't have a reason not to graze his sheep. This was finally formulated in a rigorous way by a man called Garrett Hardin: the tragedy of the commons is that if you have commons they'll be destroyed, because each person will do something that makes individual rational sense but not collective rational sense. So it's not exactly the same as the free-riding problem, but it's related to it. I can't see any reason in the world why I shouldn't graze my cow or my sheep on the common, but when everybody does that we destroy the common. It's a bit like walking down the street with a soda can and thinking, "Should I take the trouble to cross the street to put it in a recycling bin or just throw it in the trash?" One Coke bottle-- what difference is one Coke bottle going to make? But if everybody doesn't cross the street, same problem, okay? So these are the areas where there's a disconnect between individual utility and social utility, and that is what creates the need for government. We will pursue these questions and much else about classical utilitarianism next Monday.
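The commons logic above can be made concrete with a toy calculation. All the payoff numbers here are illustrative assumptions, not anything from Hardin or Bentham; the sketch only shows the structure: each herder's private gain from grazing exceeds his private share of the damage, so everyone grazes, yet once everyone does, everyone ends up worse off than if all had abstained.

```python
# A toy version of the disconnect between individual and social
# optimality: each herder decides whether to add a sheep to the common.
# The payoff numbers (10 utils of benefit, 3 utils of damage per sheep,
# borne by every herder) are hypothetical.

def commons_outcome(n_herders, benefit_per_sheep=10, damage_per_sheep=3):
    """Payoff to each herder when all reason individually."""
    # Individual calculus: my gain (10) minus the damage my own sheep
    # does to me (3) is positive, so each herder grazes a sheep.
    individual_net = benefit_per_sheep - damage_per_sheep
    grazes = individual_net > 0
    sheep = n_herders if grazes else 0
    # Social calculus: each herder bears the damage from ALL sheep.
    return benefit_per_sheep - damage_per_sheep * sheep

# With 5 herders each grazing, everyone ends up with 10 - 15 = -5,
# worse than the 0 each would have if all abstained.
assert commons_outcome(5) == -5
```

The same arithmetic underlies the war-tax and Coke-bottle examples: the individually rational move, summed across everyone, produces the collectively irrational outcome, and that gap is what Bentham's government is for.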
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
5_Classical_Utilitarianism_and_Distributive_Justice.txt
Prof: Okay, this morning we're going to carry on talking about Jeremy Bentham and classical utilitarianism. And I'm going to begin by making a few points about the measurement of utility, which we bumped into in a glancing kind of way last time, but we're going to dig into it a little bit. And then we're going to move from that into talking about utility and distribution in classical utilitarianism: how we should think about the measurement of utility across the whole society and what implications Bentham's argument has for that, and I think you'll start to see why classical utilitarianism became such an ideologically powerful doctrine in the eighteenth and nineteenth centuries. So just briefly to recap, we talked last time about Bentham's principle being to maximize "the greatest happiness of the greatest number." The idea being that if you think of, in this case, a very simple two-person society, and you think of that as the status quo-- A has that much utility, B has that much utility-- anything on this side of the status quo would be an improvement for society. The greatest happiness of the greatest number will have been increased. Now, that's all very abstract, and by way of trying to make it somewhat more concrete, let's notice two features of utility measurement. The first, as I said to you last time: as far as Bentham is concerned, this is a doctrine of what I called objective egoism; people are self-interested and behave self-interestedly, but we can figure out what's likely to motivate them regardless of their own interpretation of their actions or behavior. We have our interpretation; remember, Bentham says-- as with the physiology of the human body, so with the anatomy and physiology of the human mind-- it's the rare case that you get it right about yourself. And it's the objective scientific calculus that's going to tell us what maximizes people's utility. Now, you might say, "Well, how is that actually going to work?"
So there are two steps here. The first is that he thinks all utility is quantifiable. I went through that last time, but the piece I didn't mention is that it follows from this that utility is reducible to a single index, and in this case Bentham's thinking of money. Money is going to be the measure of utility in his scheme, and that means that we could think of these units of utility as having a kind of dollar value. So anytime you think this doctrine is crude or extreme, remember my point that this is a guy who takes every thought to the logical extreme. Let's just say, for simplicity, that one Standard International Util costs a dollar. And let's suppose you experience two Standard International Utils of pain from coming to class. Then I could make you indifferent between coming to class and not coming to class by paying you two dollars. I could get you to come to class if I paid three dollars, and I would not get you to come to class if I paid a dollar, right? And so that's the second point. In the first instance we say that utility is quantifiable and expressible through money, but then, related to that, and as indicated in the example I just gave you, we can work with a doctrine of revealed preference. We can vary the price that we charge for admission to the course. So let's imagine there are three of you: one of you experiences two utils of pain from coming to class, one experiences three utils of pain, and one experiences two utils of pleasure. There's one perverse student in the audience who actually likes coming to the class. Then we would find that if we paid a dollar, one of you would come. If we increased it to two-fifty, two of you would come, and so we could vary the price to get information about your utility. And we could even influence your behavior without actually changing your preferences, and that's a very important distinction to make.
Your enjoyment from coming or not coming to class wouldn't change, but your behavior would change if we varied the price; so we can influence your behavior by manipulating the incentives without regard to what your underlying preferences are, and we could allow them actually to stay the same. You'd rather be at home asleep, but if the price is high enough you'll come anyway. Okay, so that's all well and good at the level of thinking about one individual's behavior, but what about thinking about society in more general terms? When we talk about utilitarianism in Bentham's system, classical utilitarianism, we see that he operates with these numbers that attach to specific actions or policies and that we can make comparisons across individuals. So to put this in the jargon of economists, Bentham allows interpersonal comparisons of utility. We can say that if you take one unit of utility from one person and give it to another person, the second person's utility will go up and the first person's utility is going to go down. So it's a doctrine of interpersonal comparisons of utility. And for those of you who are mathematicians here it might also be worth noting that Bentham operates with cardinal scales. These are additive things. You can actually think about these as sort of lumps of pleasure or pain experience that can be moved around across people and can be added and subtracted. And I've put this up here so that you can think your way through the doctrine. If you imagine a status quo, a perfectly egalitarian world in which each person has six units of utility, you can start asking yourself, "Well, let's imagine if we could redistribute things." What would that mean as far as Bentham's doctrine is concerned? What I've given in this first column as a potential departure from the status quo is the utility monster example we talked about last time.
If it turns out Leonid has a vastly greater capacity to experience pleasure than anybody else, then we could get a huge increase in total utility by taking a lot from B and C and giving it to Leonid, so we would say "allow," right? Or we could think of this change from the status quo-- we go to a more inegalitarian society and, again, the greatest happiness of the greatest number has increased. We have a world here where there are eighteen utils and a world here where there are nineteen utils. Or think about this case, which we might think of as a kind of schematization of the Eichmann problem. If the utility that the Aryans gain from practicing genocide and ethnic cleansing against the Jews exceeds the utility that the Jews lose, there would be no reason under Bentham's doctrine not to do it. Okay, now there's a certain ambiguity in the phrase, "Maximize the greatest happiness of the greatest number," which Bentham never finally resolves. The ambiguity is whether he's saying just maximize the total--here the total's bigger than eighteen, here the total's bigger than eighteen, here the total's bigger than eighteen, so it's obviously the case that each is preferable, on Bentham's scheme, to the status quo. Or is he perhaps saying maximize the utility of the majority, the greatest happiness of the greatest number, the greatest number simply meaning the majority? But on that second interpretation you could still get highly inegalitarian distributions being judged superior to the status quo, because if you imagine going from here to here, we've got a majority here experiencing twelve utils of pleasure, and here we have a majority of two experiencing potentially seventeen utils of pleasure. So there is some ambiguity there as to just what Bentham meant, but most of the time he is taken as having meant just the crude statement, "Maximize the total amount of utility in the society."
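The two readings of the maximand can be made concrete with a toy comparison. The egalitarian (6, 6, 6) status quo is from the lecture's table; the candidate distributions below are hypothetical stand-ins, since the full table isn't reproduced in the transcript:

```python
# Hedged sketch of the ambiguity in "the greatest happiness of the greatest
# number". The egalitarian status quo (6, 6, 6) is from the lecture's table;
# the candidate distribution is a hypothetical stand-in.

status_quo = (6, 6, 6)

def total_utility(dist):
    """First reading: just sum the utils."""
    return sum(dist)

def majority_better_off(dist, baseline):
    """Second reading: does a strict majority gain relative to the baseline?"""
    gainers = sum(1 for new, old in zip(dist, baseline) if new > old)
    return gainers > len(dist) // 2

candidate = (17, 1, 1)  # highly unequal world with 19 total utils

print(total_utility(candidate) > total_utility(status_quo))  # True: total went 18 -> 19
print(majority_better_off(candidate, status_quo))            # False: only one person gains
```

The two readings can come apart: a distribution can raise the total while leaving the majority worse off, which is exactly the ambiguity Bentham never resolves.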
And so that nuance between whether the greatest number means a majority or just the total amount is not something that will detain us any further. Now, you could say, "Okay, so far so good, but isn't all of this a little counterintuitive?" After all, if you compare--let's focus on the difference between the status quo and distribution IV here. These people might be on the verge of starvation. Surely giving them a unit of utility is going to be much more enhancing to their happiness than giving A a unit of utility. Anyone know what the principle behind that idea is? Anyone want to take it? How many of you have done ECON 101, the first econ course? Yeah, so what is the principle that would tell you that if you have no food and I give you a loaf of bread, your utility goes up a lot more than if I have ten loaves of bread and I give you a loaf of bread? Somebody? Okay. Student: Diminishing marginal utility. Prof: Diminishing marginal utility, the principle of diminishing marginal utility of all good things. And this is the idea just encapsulated, to make it a little bit more dramatic: if you don't have a car and somebody gives you a Porsche Turbo, your utility's going to go up a huge amount, right? But if you already have a Porsche Turbo and somebody gives you a second one, you're going to get less new utility from the second Porsche than you had from the first. And if somebody gives you a third one, you're going to get less new utility from the third one than you had from the second. It's not that you won't get any new utility, but you'll get less. And the principle of diminishing marginal utility says that this line will get flatter, and flatter, and flatter toward infinity. You'll always get more utility from a new increment of the same good, but it'll be less new utility than you got from the previous increment of that same good, okay? That's the concept of diminishing marginal utility. The new utility you get diminishes at the margin.
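As a rough illustration, any concave utility function exhibits exactly this property; the logarithmic form below is only a stand-in, since nothing in Bentham fixes the shape of the curve:

```python
# Hedged sketch of diminishing marginal utility. The logarithmic curve is
# only an illustrative stand-in for "any concave utility function".
import math

def utility(quantity):
    """Total utility from holding `quantity` units of some good."""
    return math.log(1 + quantity)

# Marginal utility of the 1st, 2nd, 3rd, and 4th unit (Porsche, loaf, ...).
increments = [utility(n) - utility(n - 1) for n in range(1, 5)]
print(increments)  # each new unit adds less utility than the one before
```

Each entry in `increments` is positive but smaller than the one before it: the flattening curve the lecture describes.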
Each new Porsche is less valuable to you than the previous Porsche. Now, is that plausible? Anyone think there's a problem with that idea? Yeah? Student: The idea with shoes. If you're given one shoe you're going to get absolutely no utility, but if you're given two shoes, a right and a left, then maybe you'll get more utility? Prof: Okay, so shoes. If we just kept giving you lots of right shoes, there'd be a problem. Student: Right. Prof: Okay, so I think Bentham would have to say it would have to be pairs of shoes, right? Student: Yeah, I guess. Prof: Okay, but that's a great example to start us off on this. What else? Anything else anyone might find problematic? Yeah, over here. Student: Well, it just seems that if we're going by diminishing marginal utility, if you had everyone literally dirt poor and always starving, and you give them just a little bit of something, their happiness would increase so much more because they had so little, but people would still be living in misery technically. Prof: Just explain that a little more. Student: Well, marginal utility is if you have a little bit of something for the first time, your happiness increases so much more because it's the first time. So if you were to give people very little food, or anything at all, and then you suddenly gave them a little bit, they would get really, really happy about it. But by this, then, if they're very wealthy and they got something more, they wouldn't really be happy. So it'd be more beneficial to the utility if they only got a little bit so they would be very, very happy. Prof: Okay. That's a very sophisticated observation. I'm just going to put it to one side and come back to it in about ten minutes when I start talking about redistribution. Okay. But anything else about--if we're still focusing on one individual, anything else that might be problematic with this notion of diminishing marginal utility? Over here? Student: Well, let's say you're C.
Just because you're rich doesn't mean you don't want to be more rich, and just because you have a certain amount of money doesn't mean more money isn't going to make you equally as happy as it did before. Prof: Okay, that's true, but why is it problematic? Student: I don't know. Prof: Well, I think you hit on something really important. There are a lot of people who think that the principle of diminishing marginal utility means that money is less important to people as they have more of it. After all, we said it's the principle of diminishing marginal utility of all good things, right? Money is a way of purchasing good things, so your example might be thought to suggest that this implies the more money you have, the less important money is to you, okay? So you're right, but notice what that means. Does it mean that rich people will care less about money? It's a tricky question because the first impulse is to say, "Yes, they'll care less about money," but the answer is no. Why is the answer no? Yeah? Student: They just need more money to get the same amount of happiness. Prof: Exactly. They need more money to get the same amount of happiness precisely because of the principle of diminishing marginal utility. So you got it exactly right to see that money creates some problematic examples for the principle of diminishing marginal utility. But the thing that follows from it is that, for Donald Trump to get more utility, you have to give him a huge amount of new money just for him to get the same amount of new utility as somebody who only has ten thousand dollars, right? So the way to think about the desire for money is that it's a bit like a heroin addict, who needs more, and more, and more new heroin to get the same hit, right? So the more money you have, actually, the more money you will want in order to get the next marginal increment of utility. So we should expect rich people to be greedy by this theory, not to become more and more indifferent to money.
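This point can be checked with a small calculation. Assuming, purely for illustration, a logarithmic utility of wealth (the lecture names no particular curve), the cash needed to buy one fixed increment of utility grows in proportion to the wealth you already have:

```python
# Hedged sketch of why the rich need MORE money, not less, for the same
# utility hit. Log utility of wealth is an illustrative assumption.
import math

def cash_for_one_util(wealth):
    """Cash x solving log(wealth + x) - log(wealth) = 1, i.e. x = wealth * (e - 1)."""
    return wealth * (math.e - 1)

print(round(cash_for_one_util(10_000)))         # modest wealth: about $17,183
print(round(cash_for_one_util(1_000_000_000)))  # billionaire: about $1.72 billion
```

So under this assumption the marginal importance of a dollar falls with wealth, but the appetite for dollars rises: exactly the heroin-addict dynamic described above.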
Very important assumption and a lot of people get that wrong when they think about the principle of diminishing marginal utility. Are there any other examples of this doctrine that might make it seem problematic? Yeah, over there. Student: Well, if I had a second Porsche Turbo I would be just really reckless with it and I could do whatever I want. I wouldn't have to protect the first Porsche Turbo as much. Prof: Yeah? Student: I mean it's like there's more you can do with it, right? Prof: Yeah, so why is that a problem? Student: Well, then wouldn't the second car-- I mean, like, say if you have a little bit and you're given a little bit your utility goes up, but you really want to protect that little bit, but when you get more maybe it encourages you to save money, to not spend more. Prof: So you wouldn't want the second one? Student: What? Prof: Are you saying you wouldn't want the second one? Student: Well, why wouldn't I want the second one? Prof: If you had one and I said, "I'll give you my one, it's right out there," you wouldn't want it? Student: It's not that I wouldn't want it. Prof: You wouldn't be like Jay Leno, who--how many cars does Jay Leno have? Student: Too many. It's not that I wouldn't want it, but maybe the utility for the second one in some cases would be more than the utility for the first one so the curve would be thrown off. Prof: Because? Student: Because you want to protect that first one, so, I mean, so you don't lose what little you have. Prof: Okay, so it's a possibility. Any other examples of where this becomes problematic? I mean, think about beer. One beer increases your utility a lot. The next, and the next, and the fourteenth, isn't it going to at some--you know, or taking an aspirin, isn't it going to, you know? No? Student: What about other values like integrity? If you have a little bit of integrity and you get some more, but if you have a lot of integrity, a little bit more is still worth an equal amount. 
Prof: So integrity is a great example, because once you start putting values like that out there it, I think, threatens the idea that it's all reducible to a single index, right? Because you can't--having a little bit of integrity is sort of like being a little bit pregnant, right? Once Eliot Spitzer's integrity is blown it's not like there's some left--it's a binary good. You either have it or you don't, right? People think he's either a hypocrite or he's not; it's a binary thing. Maybe some people are somewhat hypocritical, but it seems like there's a threshold there, one side or the other. So there might be some goods like integrity that are not easily capturable in this logic. We should put that out there, but yeah, over here? Student: What about health? It's not quite binary because you can be in medium health, but I think it would be pretty useful to go from healthy to super healthy, ad infinitum. Prof: Health. The thing about health is that it's tricky--actually less so in our day than Bentham's--it's tricky to think about redistributing health, right? Although you'll see we will come up against some pretty bizarre cases. If some people are sighted and some people are blind and you could do eye transplants, should we be transplanting from the sighted to the blind? Arguably the blind person would gain more utility from getting one eye than the sighted person would lose from losing one eye, so shouldn't we do that? So that can also give you some ways of proceeding that would make you queasy, right, if you allow the principle of diminishing marginal utility. What about the examples I threw out there, beer and aspirins? They're a bit like the sort of left shoe examples, right? I don't think those are actually deep problems for Bentham's theory, because I think what he would say is, "Well, you'd drink beer and at some point you would sell the beer rather than make yourself paralytically drunk and feel terrible."
You'd sell the beer and use the money to buy some other good that would give you increasing utility at a diminishing marginal rate. So the main thing is the fungibility of utility and its expressibility in terms of money; although, as was pointed out here, when we think about the diminishing marginal utility even of money, we shouldn't think that that makes you care less about money the richer you get; rather, it will make you care more about money the richer you get. Okay, now, here's a historical statement about the principle of diminishing marginal utility. Every serious economist since the eighteenth century has assumed that the principle of diminishing marginal utility is true, including Jeremy Bentham. You can't do economics without assuming that the principle of diminishing marginal utility is true. And I think if you threw out some of these problematic instances like integrity, what Bentham would have said, or what any economist would have said, is, "Well, yes, there are some things that are not easily captured by this idea, but if you want to see how people are going to behave, if you want to get it right, it's a better assumption than any of the competing assumptions you could make. It's going to get you closer to the truth more of the time than not assuming the principle of diminishing marginal utility is true." So Bentham would probably have said that, I think, if questioned or if somebody had probed with some of these counterexamples. So it's the best assumption you can make, given that you've got to assume something.
But now I want to come back to the sophisticated point that was made in the middle at the back there a few minutes ago, when you start to think about the utility that people at the bottom of the social order derive from a particular good, versus the utility that the people at the top of the social order derive from that same good, because in Bentham's scheme, remember, we are allowing comparisons across individuals. Let's suppose a two-person society again, and let's suppose it consists of Donald Trump-- well, it can be a multi-person society but we're just going to focus on two: Donald Trump and a homeless woman living out of a left luggage locker in Grand Central Station. Actually there are no lockers at Grand Central, but there are at Penn Station, okay? And the question is, should we take a dollar from Trump and give it to the bag lady? Should we? Yes? No? How many think yes? Okay, yeah, almost everybody. Why? Because by assumption, with the principle of diminishing marginal utility, when we take the dollar from Trump up there, his loss of utility is negligible, but when we give it to the woman who's starving down here, her gain in utility from that dollar is enormous, right? So we should take the dollar from Trump. Let's assume there's no dead weight loss to the government and all of that for right now. We will just keep it simple. We should take that dollar from Trump and we should give it to the bag lady, and the greatest happiness of the greatest number will have increased, right? But then maybe we should take another dollar, shouldn't we? I mean, it worked the first time, so we should take a second dollar from Trump and give it to the bag lady, and a third dollar, and a fourth dollar. When are we going to stop? When are we going to stop? Yeah? Student: > Prof: Yeah, we're going to stop at the point of perfect equality, right? We're going to keep redistributing until they have the same amount.
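The dollar-by-dollar argument can be simulated directly. Assuming identical logarithmic utility of wealth for both parties (an illustrative choice; the lecture supplies no curve), each transfer raises total utility until the two holdings are equal:

```python
# Hedged sketch of the take-a-dollar-at-a-time argument. Both parties are
# assumed to have the same logarithmic utility of wealth (an illustrative
# choice). Each transfer helps while the recipient's marginal utility
# exceeds the donor's, which with identical concave utilities stops only
# at perfect equality.

def marginal_utility(wealth):
    """Derivative of log(wealth + 1): smaller the richer you are."""
    return 1.0 / (wealth + 1.0)

trump, bag_lady = 1_000_000.0, 10.0
while marginal_utility(bag_lady) > marginal_utility(trump):
    trump -= 1.0     # take a dollar from Trump...
    bag_lady += 1.0  # ...and give it to the bag lady

print(round(trump), round(bag_lady))  # both end at 500005: perfect equality
```

The loop makes the built-in egalitarian pull visible: nothing in the logic stops short of equality unless, as Bentham argues next, something like crop-burning resistance is added to the model.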
So now you should be able to start to see why classical utilitarianism was a doctrine that was thought to be profoundly radical and frightening to rich men, because it has this built-in impetus for downward redistribution. You can say, well, there'll be costs, there'll be dead weight loss to the state and so on, but still the underlying logic says take it from Trump and give it to the bag lady, right? At the margin that's what you should do. And Bentham completely saw that this was an implication of his doctrine. Now, Bentham was a fairly radical guy. He was a supporter of democracy, which was a radical thing at that time. But he wasn't as egalitarian as all that, and he wanted to temper the downward redistribution that flows from his principle, and so he makes a distinction between what he refers to as "absolute" and "practical" equality. He says, "Suppose but a commencement made, by the power of a government of any kind, in the design of establishing it [absolute equality, that is, redistributing to equality], the effect would be--that, instead of every one's having an equal share in the sum of the objects of general desire--and in particular the means of subsistence, and the matter of abundance, no one would have any share of it at all. Before any division of it could be made, the whole would be destroyed; and destroyed, along with it, by those whom, as well as those for the sake of whom, the division had been ordained." He's basically saying, if you want to reduce that to a bumper sticker, that the rich will burn their crops before giving them to the poor, and that is a common argument in politics. It's the sort of reverse of trickle-down, right? Trickle-down is the notion that you allow inequality because the rich will create more wealth for everybody: the pie gets bigger for everybody, and so the greatest amount of utility is increased by allowing inequality. This is the inverse claim.
Bentham's saying, "Well yes, in principle absolute equality would maximize the greatest happiness of the greatest number, but in fact if a government set out to do that, the rich would rebel." And this is a claim that is often made in everyday politics. "You'll destroy incentives to work" is the claim that you'll hear when we have arguments about raising taxes in the run-up to the fall elections, right? In the transition to democracy in South Africa people said the white farmers would destroy their farms before turning them over to the majority. It turned out not to be true. So with those examples put on the table, what sort of force does this claim have? It's really an empirical claim, and we don't really know how much the rich will tolerate before burning their crops. Presumably they'll allow some redistributive taxation, but we don't know how much, and a lot of the day-to-day argument of politics turns around how much. So Bentham makes a distinction between absolute and practical equality, and he says, "We should redistribute to the point of practical equality, but not to the point of absolute equality, because redistributing beyond practical equality has this perverse counter-trickle-down logic, and that's not going to be acceptable from the standpoint of the principle of utility." Okay, so when you allow both interpersonal comparisons of utility and you assume diminishing marginal utility, utilitarianism becomes a very radical doctrine. You can hedge it in to some extent with claims of this sort, but they are themselves controversial and you're going to get into a very messy world of macroeconomic predictions and counter-predictions about whether and when you reach this point of practical equality, or when the gains from downward redistribution are offset by the losses from the shrinking of the pie. Now, some of you might have said, "Well, at the beginning of this course of lectures, Shapiro said, 'Every Enlightenment thinker is committed to two postulates.
One is that we can have a scientific theory of politics, and the other is that individual freedom, operationalized as a doctrine of rights, is the most important good.' Now, having sat through these lectures on Bentham, I can see what he's saying about science. Bentham has this monomaniacal view of science. He's got his objective egoism. He can figure it all out, what will maximize social utility, and run around the world writing constitutions for people; he can devise a whole public policy that's going to scientifically maximize the utility of society. But I'm not seeing a whole lot of room for rights in this doctrine. It seems to allow ethnic cleansing, even genocide. It seems to allow redistribution from one person to another, all justified on the grounds that this is maximizing the total utility of society. Well, even if it is, how does this respect individual rights?" Am I just wrong? Is there some elementary thing I've missed here? There's not much room for rights in Bentham's doctrine. So am I just wrong that these Enlightenment thinkers were committed to individual rights? It would be a reasonable inference from what I've said so far. But remember, for Bentham, when we try to maximize utility in the society, individual motivation is vital. This is a passage I read to you last week, but I'm just repeating it: "The great enemies of public peace are the selfish and dissocial passions-- necessary as they are...Society is held together only by the sacrifices that men can be induced to make of the gratifications they demand: to obtain these sacrifices is the great difficulty, the great task of government." He's saying you have to work with individual motivations. You can't ignore them, and I think that is the point that's behind his distinction between absolute and practical equality. The rich will burn their crops before giving them to the poor. You have to take that into account. You have to see individuals as the basic generators of utility.
In another piece of Bentham's writing which I didn't have you read, but I'll just put it out there because it's where you start to see our old friend the workmanship ideal creeping by the backdoor into utilitarianism. Bentham says, "Law does not say to man, Work and I will reward you but it says: Labour, and by stopping the hand that would take them from you, I will ensure you the fruits of your labour-- its natural and sufficient reward, which without me you cannot preserve. If industry creates, it is law which preserves. If at the first we owe everything to labour; at the second, and every succeeding moment, we owe everything to law." So another way of thinking about this is, that Bentham's idea of the state is essentially regulatory. It stays the hand of somebody else who would steal your goods, but the government cannot itself create utility. Labor creates utility, and this is why I say that workmanship, that idea that we first confronted when we talked about Locke, comes into utilitarianism by the backdoor, because Bentham's going to say, "Unless you respect individual rights you're not going to be able to maximize utility for the society as a whole." So the state is basically a regulative state, not a state that's actively involved in creating utility for individuals. It will do some redistribution to the point of practical equality, but the basic idea is that the state should be hands-off with respect to the utility creation in the society. It's industry that creates utility--labor, work--so incentives are going to be important going forward if you're going to maximize utility. So that's the way in which we see that even a classical utilitarian like Bentham is going to resist dispensing with the doctrine of individual rights. Now, there's a problem, though, with his mode of doing this, and the problem arises because the claim that the rich will burn their crops before giving them to the poor might not be true. 
And even when we get to less extreme circumstances than South Africa before and after the transition, when we look at actual debates in contemporary politics in the United States, this is what we see. Ronald Reagan comes in, in 1980, and says, "If we cut taxes, the pie will get bigger for all and there'll actually be more revenue," and so utilitarianism says do it. And the Democrats say, "No, there won't," and it's an empirical argument. And you will find, if you go back now and look at what happened during the 1980s, perfectly credible economists lining up on both sides, because they cut the taxes, but, of course, eight other things happened as well that affect the macro-economy, right? And disentangling how much the tax cuts were responsible for what happened, versus how much many other things that happened were responsible, nobody really knows. Or look at the current debate we watched and are watching unfold about the economic stimulus. If the economy turns around between now and November, the Democrats will probably do a lot better than if it doesn't, but the Republicans will say, "Well, it would have turned around faster if we hadn't had all this taxation." And Paul Krugman will say, "Well, it would have turned around even faster if we had had more taxation." And so a lot of the problem in debating incentives, once you get into the real world of macroeconomic policy-making, is, (a), that you never have the counterfactual; you can't go and rerun history without the stimulus, right, or without the Reagan tax cuts. And, (b), the sheer complexity; so many other things happened--the price of oil goes up, or commodities collapse, or the dollar moves, or this, or that, or the Chinese do or don't change the value of their currency. So when it gets down to it, you're never going to get a definitive answer to the question of what is the point of practical equality. When have we passed the point of practical equality, to use Bentham's terminology?
Are we close to it? Have we gone past it? Are we nowhere near it? There have been periods in our history when we've had top marginal tax rates of 90 percent, right? Reagan thought a top marginal tax rate of 40 percent was beyond the point of practical equality. You're never going to get a definitive resolution of those questions. But if we think back to what the aspiration of the early Enlightenment was, it was certainty. Remember the example I read to you from Hobbes, from his Epistle Dedicatory to his Six Lessons to the Professors of Mathematics; he said, "For the things we don't make, we can't know; we can only guess about the causes," right? Well, here we're guessing about the causes. We don't really know, and the people who want either policy will be able to find a plausible set of experts to defend their view. So you're getting into this very messy world of macroeconomic prediction if you want to put some limits on the radical edge of classical utilitarianism. And as a matter of history, that's not how it went. As a matter of history, how it went was to rethink the analytical structure of utilitarianism in a way that completely defanged its radical redistributive edge without any reference to these messy macroeconomic considerations. And just how that happened, in the transition from classical to what we're going to call neoclassical utilitarianism, is the subject with which I will begin on Wednesday. See you then.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
16_The_Rawlsian_Social_Contract.txt
Prof: So welcome back everybody. It probably will take a while to wrestle your brains back to what we were talking about before the break, but I'll do my best to help in that endeavor. We're really finishing up the first two-thirds of the course by talking about John Rawls, a very interesting figure and phenomenon in modern political philosophy. We're finishing up the first two-thirds of the course in the sense that this is the third Enlightenment tradition we're talking about, the first two having been utilitarianism and Marxism. And after we finish with Rawls we're going to talk about the anti-Enlightenment tradition and then the democratic traditions. So Rawls is an odd figure in some ways. If I had been here, or if a predecessor from a prior generation had been here teaching a course like this in the 1950s or early 1960s, and somebody had speculated that maybe a very major figure would emerge in American political philosophy, there would have been a lot of skepticism, and there would have been even more skepticism if somebody had said, "And they will be a theorist of the social contract." And I think there would have been two reasons for that skepticism. One is that political philosophy was really not thought to be a particularly important area of academic philosophy in the 1950s and 1960s. The people who were seen as the cutting edge in philosophy were doing philosophy of language, logic, epistemology, metaphysics, and political philosophy was way down on the totem pole. It was a subdivision of ethics, for many people, if you like. And the notion that anybody in philosophy who did political philosophy would turn out to be a major figure would have attracted a lot of skepticism among academic philosophers.
And yet, if you polled the most prestigious philosophy departments in the world today, not just the English-speaking world, and asked, "Who was the most important philosopher of the last third of the twentieth century, not political philosopher, just philosopher," John Rawls would be cited more than any other person, without any doubt about it. So that's one reason people would have been skeptical: they just didn't think political philosophy was that important, and the serious heavyweights in philosophy did other things. But then I think the second reason people would have been skeptical is that he was a theorist of the social contract, and everybody knew that the social contract, which, as you all know, has been around in its modern form at least since the seventeenth century, had these two huge problems. One was that it didn't have any grounding in natural law that people accepted, and the second was that there never was a social contract. We now know from 150 years of anthropology that there never was a social contract. Aristotle was closer to being accurate when he treated human beings as inherently political. There was never a pre-political condition. And as you now know, because we've studied Robert Nozick, the revival of the social contract tradition answered both those problems, first by replacing natural law with some version of Immanuel Kant's ethics--so Kant becomes the placeholder for natural law--and on the other hand by working with hypothetical contracts rather than actual contracts, asking the question, "What would people agree to under certain specified conditions?" not "What did people agree to?" But neither of those ideas was invented by Nozick.
Rather both of those ideas were invented by Rawls, and Nozick was one of many people who reacted to Rawls. I had pedagogical reasons for dealing with Nozick first, namely that his argument grows so directly out of Locke's, but if this were a course in the history of twentieth-century political philosophy of course we would have done Rawls first. And it's really important to say this because Nozick's book would never have been written but for Rawls. And I think one measure of the importance of Rawls is that there are probably fifty books, and I don't know how many articles, that would never have been written but for Rawls. You can go and find Rawls's book in the library and you'll find it on the shelf next to books just listing the citations to Rawls's book. And so that's a very interesting fact. It's also an interesting fact because Rawls is not a great in the sense that we think of Hobbes, Locke, or Mill, or Dewey, whom we don't read in this course, as greats, in the sense that most of those people had a view of the world that ranged right across all domains of knowledge. So if you read Locke, he had a view of knowledge, a view of language, a view of theology, a view of politics; a view of everything. Mill had a theory of science. He had a mathematical theory. He had theories of meaning. He had an epistemology, and he had a theory of politics. Dewey, same story: there's a whole worldview worked out across everything we today think of as separate disciplines, though for most of the tradition knowledge wasn't divided up the way we divide things up today. But the people we tend to call greats had a view that ranges across the whole gamut of knowledge. Rawls didn't do that. He doesn't have a metaphysics. He doesn't have an epistemology, he doesn't have a theory of science, he doesn't have a theory of language. He only wrote this book, basically.
He wrote some articles which lead up to the book and then some things that follow out of the book, but basically his book A Theory of Justice is it. And so he's not a great in that sense of the greats of the tradition, but he certainly has more intellectual staying power than any contemporary, in the broad sense of the word, that you've read in this course or will read in this course. People will still be reading Rawls long after people like me have been forgotten about. So in that sense he's a really important figure, and he's a really important figure also in the sense that even if you don't like his arguments, even if you are completely unpersuaded by all of his arguments, you have to come to grips with him. I'm not in sympathy with any of his major arguments, but you cannot work in this field and not deal with John Rawls. That's how important he is, and he's going to be for a long time. So that's just by way of background and letting you know what you're dealing with. One other thing I'd say about A Theory of Justice is, it is not a well-written book. It's not badly written in the sense that it's unclear. Any given paragraph is clear enough if you sit down and figure out what the jargon means. It's not hard in the sense that, say, the technical sides of Marx or Pareto are hard, but it's not captivating writing. You need a chair with a hard back to read this book, and there's a reason for that. The reason is that although the book was published in 1971, Rawls actually came up with the main ideas in the 1960s in a couple of articles, the most famous of which was called Justice as Fairness, and he circulated these articles in the philosophy profession, and he kept getting criticisms. And eventually he had a book manuscript, and he circulated the book manuscript and he kept getting criticisms. And every time somebody sent him a criticism he added three paragraphs to address the criticism.
This is not the way to write a book if you want it to be captivating. So it's got this kind of almost plodding quality that's at variance with the hype I just gave you about his importance, but it's to do with the composition of the book; I mean, there's ten years of endlessly fiddling with this manuscript. And I should also say that he eventually did a second edition of the book later in his life, which is a substantial rewrite of the first edition. So he was somebody who couldn't stop fiddling, and it's not a trait I would commend to you, but in any event there it is. And it's a long, and if not plodding, certainly ponderous book, and my goal in these lectures about Rawls is to try to pull out the main ideas for you, and particularly the main enduring ideas. Because Rawls, like everybody else we've read, fails as an architectonic theorist. The pieces don't add up. There are big logical holes in the big structure. So if you want it to be the silver bullet or the final word, it's not going to happen. Nonetheless, there are very important enduring insights and questions Rawls put on the table which have not gone away and are not going to go away for anybody who wants to think about the fundamentals of political association. So what are these ideas? Well, they get, I think, mixed up to some extent, or hidden, or obscured, or made to seem more complicated than they should be, partly because of the architecture of his theory, partly because of the way he does the exposition. He has this story about the original position, which is his version of the hypothetical social contract. And let me just give you the intuition, but I want to preface it by saying it's actually not important to his theory. It's really an expository device because what he does is he structures a hypothetical choice, and then he gives you certain kinds of information to get you to choose a certain outcome.
So unless the outcome is itself independently desirable, the fact that this thought experiment leads to it is of no interest. Let me give you an example before we get into Rawls, which is one that he himself gives. I'm not sure if it's one of the excerpts you read or not, but this is an observation that has been around long before Rawls. He says, "What is the fair way to cut a cake?" Is this in what you read, anybody, "What's the fair way to cut a cake?" Probably not, I'm sure you spent your spring break reading through Rawls. Yeah? Student: [inaudible] Prof: Correct. So the person with the knife gets the last slice, and what will they do? Student: [inaudible]. Prof: But how will they divide it? Student: [inaudible]. Prof: There are two assumptions there, okay? You say, "What's the fair way to cut a cake?" The answer: "The fair way to cut a cake is the person with the knife gets the last slice." What will they do? They will divide it equally, right? That's how the person with the knife gets the biggest possible slice, right? Right, yeah? Anyone think that's not the fair way to cut a cake intuitively? Okay, well there are two assumptions there that are worth bringing to the fore just for the purpose of what I'm going to say to you about Rawls in a minute. One is that we think dividing the cake equally is the right outcome, you know, we've devised a system where the cake gets equally divided, right? But do we think it should be equally divided? What if I added other information, like one of the people in the room was starving and hadn't eaten for three days, or one was a diabetic? We could add other information which would make you wonder, do you want to get an equal division, right? So the cake-cutting example doesn't show you that equality is a good thing. It presumes that you've already decided equality's a good thing and you want to get the person to choose equality, right? Then the other thing it assumes is that people are going to behave self-interestedly, right?
When we give the person the knife and say, "Divide it however you like. You get the last slice," we're assuming that she or he will want to get the biggest possible slice. So immediately we've got two assumptions built into there, one that equality's a good thing. That's the result we actually do want to get, and secondly that people are going to behave in a self-interested way, right? Which isn't to say they're bad assumptions, but it's to note that they are assumptions, okay? Now, Rawls' original position has the same structure as the cake cutting for both of those reasons. He has a distributive outcome that he wants to convince you is a good thing, and he's going to create a hypothetical choice situation that will lead you to it, right? But that doesn't itself establish that it is a good thing. You have to have some other argument to convince you that it is a good thing, and I'll tell you what that argument is, but it's completely independent of this expository device that's modeled on the cake cutting. So the expository device that's modeled on the cake cutting goes like this. It says imagine you had to design a social order, a society in the broadest sense of the word. It will include an economic system, a political system and so on, and you didn't know whether you were going to turn out to be rich or poor, male or female, what race you were going to be, whether you were going to be an athlete or a nerd. You didn't have any particular information about yourself, whether you're going to have a high IQ or a low IQ, musical, not musical, good athlete, a bad athlete, nothing. You didn't have that kind of information about yourself, which doesn't mean that there could be people who didn't have those characteristics, right? Just like say you had to design the rules of chess and you didn't know whether or not you're going to be good at using bishop, better at using a bishop than using a knight, but you had to agree on certain rules, okay? 
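The cake-cutting logic can be made concrete with a tiny sketch (my illustration, not anything in Rawls; the cake size of 12 units is an arbitrary assumption). Since the cutter takes the last slice, the other players grab the largest slices first and a self-interested cutter ends up with the smallest slice of whatever division she chooses, so maximizing her own share means cutting equally:

```python
def cutters_share(division):
    """The cutter takes the last slice: the others pick the biggest
    slices first, leaving the cutter with the smallest one."""
    return min(division)

# All candidate ways to cut a cake of 12 units into 3 whole-unit slices.
divisions = [(a, b, 12 - a - b)
             for a in range(1, 11)
             for b in range(1, 12 - a)]

# A self-interested cutter picks the division that maximizes
# her own (i.e., the smallest) slice: the equal split.
best = max(divisions, key=cutters_share)
print(best)  # (4, 4, 4)
```

Note the two assumptions from the lecture are baked right in: `cutters_share` encodes self-interest, and the procedure only "works" because we already wanted the equal split to come out.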
So the rules for designing society you're going to choose while being ignorant of what he calls particular facts about your circumstances. You're going to know only certain pretty general things; he says, "It's a world of moderate scarcity." So it's not superabundance, which is a good thing, because we found out when we studied Marx that there's no coherent notion of superabundance. Moderate scarcity: it's not a developing country, what we think of today as a third world or developing country. It's basically principles for countries of the sort we live in, okay, so moderate scarcity. And we're going to assume certain basics, what he calls laws of psychology and economics, and I think that people largely behave self-interestedly is the most important of those. But beyond that you're not going to have particular knowledge about yourself and your circumstances. In particular, sorry to use particular in two conflicting ways, but in particular, you're not going to have the kind of knowledge that would allow you to bias things in your own direction. So that if you knew you were going to turn out to be female you could say women should earn twice as much as men, but you're not going to know whether you're going to turn out to be female or male. So the kind of knowledge you're going to be denied is the kind of knowledge that would let you bias things in your own favor, okay? So that's the sense in which he's trying to be Kantian. He calls his principles "procedural expressions of the categorical imperative." There's a mouthful for you on the first day back from spring break. We know what the categorical imperative is, right? It's the imperative to choose things that are universalizable, things that you would will regardless of the consequences, so things you would will from every conceivable standpoint.
And what Rawls is trying to do when he says there's a procedural expression of it is say, "Well, if you don't have knowledge of which kind of person you're going to turn out to be in terms of rich or poor, or male or female, or black or white, or Hispanic, or some other ethnic group, or religious of some sort, or atheist, you don't know any of those things, then you're going to have to think about what are the best social rules for people regardless of who they turn out to be." And that's the sense in which he wants to think of himself as a Kantian. So whereas for Nozick it's sort of just a slogan, for Rawls it's really built into the structure of his argument, okay? And the idea of the original position is to force us, even while recognizing we're self-interested, to think about society as a whole, to think about what would be desirable regardless of who you turned out to be. And so then the basic way the book proceeds, if you had time to read the whole book, is that he starts out with this complete veil of ignorance and tries to get you to agree with him. In this sense it's not even really a social contract. He's not saying, "Would you agree with one another?" What he wants to say is, "Will you, the reader, agree with me, John Rawls, that any rational person would choose the principles that I'm arguing for?" In that sense, and we can't pursue this in this course because we didn't read Hobbes, he's actually more like Hobbes than he is like Locke, because for Hobbes the social contract isn't legitimate because anybody made it, but because it must be rational to make it. Any rational person, says Hobbes, would agree to give up their freedom to an absolute sovereign because anything else leads to civil war and is just madness. So it's a property of rationality for Hobbes that people will accept the authority of the sovereign. It isn't really a contract. Well, Rawls is more like Hobbes on that point.
He's saying, "I, John Rawls, want to persuade you, the reader, that any rational person would choose my principles of justice over the going alternatives." And notice his style of thinking: people go on and on about Rawls being abstract, an ideal theorist, head in the clouds, but his actual way of proceeding isn't that. It's comparative. Basically what he does is he says, "Well, what are the going alternatives?" There's utilitarianism. There are other ones you haven't read in this course. There's perfectionism, which is what he finds in Aristotle. There's Marxism. "I want to show you that my principle does better than the going alternatives from the perspective of being behind this veil of ignorance. If somebody else comes along with another principle and shows that it does even better than mine, then I would give it up." So his basic mode of reasoning is comparative, okay? And so what he does is he has a general principle of justice which he wants to persuade you of first from behind this veil of ignorance, and then more specific applications of it. He ends up coming up with two principles of justice that are the applications, which are really three principles, so I'll go through them with you. But as you go more and more into the book he keeps adding information and lets you design more specific institutions and so on, always with the caveat that as you get more information later you can't go back and undo choices that you made earlier, right? So it's sort of like--I don't know if you've been around long enough to ever see Congress go through a base-closing exercise for the military, where they realize that there's going to be special pleading from everywhere. Say they're going to get rid of thirty military bases. Every congressional district that has a base in it is going to have good reasons why: "Yes, we should get rid of thirty bases, but not the one in our district, right? We don't want to stop making submarines in Groton," right?
Whatever it is. And so what they do is they create a commission that agrees on the base closings nationwide, and they have to vote up or down on the entire package, and then they can't start undoing it later. So this has the structure of a base-closing commission: as the veil of ignorance starts to be lifted, and I discover that I actually turn out to be female rather than male, I can't then say, "Oh, well, women should get a certain particular kind of advantage," right? So that's the way the book proceeds. Now there's one other reason I should tell you about--or actually two reasons I should tell you about, concerning why this book had such a big impact, why this book has had the staying power that it's had. One is it's really very much a book of the 1960s and '70s, when there was, to some extent, a crisis of confidence about liberal democratic institutions born of the student movement, and the Vietnam War, and everything that went with it. That is to say, there was a generation of people who thought we needed to have critical standards for evaluating government, and utilitarianism, which was the main alternative around, didn't seem to provide them. And Rawls came up with this notion that we could come up with an independent standard for judging actually existing political systems, and then use it to see how they measure up. It wouldn't have to be rooted in natural law and all the problems that went with it. It was going to be rooted in this universal Kantian ideal, and it would give us principles by which we could evaluate not only what our government does, but other governments. So I think it was, to some extent, the kind of thirst for criteria that was characteristic of that era that gave Rawls his staying power.
But then I think the other reason, the other reason that Rawls had staying power was that he changed the subject that people who had been squabbling about utilitarianism for 150 years had been arguing about, because--and again, you know this now because of the first half of this semester, but utilitarianism had basically been struggling between two variants. One, which we think of in the terminology of this course as classical utilitarianism, we might call objectivist where you make strong interpersonal judgments of utility, and the problem with that as we saw, and as Rawls says repeatedly in his book, it doesn't take seriously the differences among persons. You could see this in the utility monster example. You could see this in the problems that we have with the bag lady and Donald Trump. You can see this in problems with the disabled. That if you don't allow interpersonal-- I'm sorry, if you do allow interpersonal judgments of utility you can do Draconian things in the name of maximizing utility. But if you say, "No we're not going to do that," and you make the neoclassical move, and you then say, "We cannot even allow that taking a nickel from Trump and giving it to the bag lady necessarily leads to an increase in social utility," then you seem to have the opposite problem. So the objectivist problem is it allows people to be used in the name of maximizing utility. The subjectivist version, the neoclassical version doesn't seem to allow any interpersonal judgments of utility. Both are deeply morally unsatisfying, and the proponents of each one tend to make the case for their view mainly by pointing to the demerits of the other view, right? And they're both right. Both of these views have serious demerits. And so part of what Rawls does is he changes the subject. He changes the subject, and he changes it in an interesting way. He says, "Look, the truth is we should be objectivist about some things and subjectivist about other things." 
And what does he mean by this? He says, "Look, people are substantially alike on some dimensions and unalike on other dimensions. There's deep pluralism of values, yes, but we basically have the same needs, the same physiology. We tend to need the same kinds of resources, so let's focus on resources rather than on utility. Let's focus on some basic resources that everybody needs regardless of whether they're going to be intellectuals, or artists, or sportsmen, or sportswomen, or politicians. There are certain things you're going to need more of rather than less of, other things being equal, sort of instrumental goods you could think of them as. And let's focus on that." And especially in political theory those are a good thing to focus on because after all, we're talking about what the state might or might not do. And the state, as we all know, acts with blunt instruments. This idea of the government sort of putting a utilitometer under people's tongues to find out what their utility is, apart from being technically problematic, nobody wants that. It's morally undesirable. So you have to think about the state as something that acts with blunt instruments and you can only really-- if you want a realistic political theory you should focus on some basic resources in the society that the state could have some impact on. So instead of various competing definitions of utility or welfare Rawls says, "Let's change the subject to talking about resources that have the quality (a) that they're things that we could really imagine the state dealing with, and (b) that are instrumentally valuable to people no matter what they turn out to want in life." So that's a second reason he's important. Academics are not comfortable unless they create an -ism word, so the -ism word is resourcism. Rawls actually is saying resourcism. Stop talking about welfarism. 
Stop talking about utility, or welfare, or the subjective experiences that people get, but rather the resources that they have at their disposal. So that's a second reason, I think, his views have had a lot of staying power, and people who don't like his particular resourcist theory have nonetheless embraced other resourcist theories, again, sort of in the wake of Rawls, if you like. Okay, so what is the basic idea? What is the basic principle? It's his general conception of justice, of which he says "All social values," by which he means resources as I've just said it to you now, and he's going to say that there are three-- well, he talks about liberties, opportunities, income and wealth, which he treats together. So that's three, and then a fourth one, the social bases of self-respect, I'll come back to all of that in a minute-- "are to be distributed equally unless an unequal distribution of any or all of these values is to everyone's advantage." That is the basic idea. All social values, by which he means resources, should be distributed equally unless an unequal distribution benefits everyone. That is the first and most general formulation of his principle. So let me just backup a little bit. I'll go through liberties, opportunities, income and wealth in more detail starting in a minute. I'm just going to mention the social bases for self-respect briefly and then not talk about it anymore because of time limitations, and because Rawls himself never says anything much about them, and what he has to say I don't think is very coherent. So we'll just forget about the social bases of self-respect for the moment, maybe come back to them later. But so here's the thing, "...are to be distributed equally unless an unequal distribution works to everybody's advantage." Now you might say, "Why?" right? And that's the first question, right? It's the first question you ask. And Rawls is not going to give you a straight answer. 
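The general conception just stated is simple enough to write down as a test. Here is a minimal sketch (the payoff numbers are my own illustration, not Rawls's): an unequal distribution of some social value is permitted only if it works to everyone's advantage relative to the equal baseline.

```python
def permitted(unequal, equal_baseline):
    """Rawls's general conception as stated in lecture: an unequal
    distribution is allowed only if it benefits every person
    compared with the equal distribution."""
    return all(u > e for u, e in zip(unequal, equal_baseline))

equal = [5, 5, 5]
print(permitted([9, 7, 6], equal))   # True: everyone gains from the inequality
print(permitted([12, 6, 4], equal))  # False: the worst-off person loses
```

The second case is the interesting one: total resources are higher (22 versus 15), which a utilitarian might applaud, but the inequality fails Rawls's test because it does not benefit everyone.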
There's not a "because" for the reason I said to you earlier. His reasoning is comparative. So he says, "Well, you could say this is one candidate principle. You bring another candidate principle like say utilitarianism, and we'll look at both from behind the veil of ignorance and see which makes more sense to pick," okay? So that's the sense in which he has a comparative-advantage argument, not a knockdown philosophical demonstration from first principles that this must follow. So while I do want to say that he thinks it follows from the nature of reason that you would make this choice, it's only a comparative choice and somebody could come along with something else and convince you that it does better than his principle and then he'd have to accept that. So what he does, and this is his resourcism in action, is he defines these primary goods. Primary goods are these instrumental goods, these things you would, other things being equal, rather have more of than less of. And the three we're going to focus on are liberties, opportunities, and income and wealth. We'll probably only manage to deal with liberties today, but it'll give you a flavor of how his reasoning works. So, what are liberties? Well, they're pretty much the sort of thing in the Bill of Rights, in the American Bill of Rights: freedom of speech, freedom of religion, freedom of association, freedom, actually, to participate in democratic politics is one that he talks about. And his principle for the distribution of liberties is, I just put it up there, he says, "Each person is to have an equal right to the most extensive system of total extensive"-- I told you he is ponderous, "...extensive total system of liberties compatible with a similar system of liberty for all." Sounds like a lot of words not saying very much, so let me show you why it says more than it might appear to say at first sight. Let's take the example of religious freedom. So let's say, "Well, should we have an established religion?" 
How can we reason about this from behind the veil of ignorance? We don't know, once the veil of ignorance is lifted, whether we're going to be Christians, or Jews, or Muslims, or atheists, or agnostics, or something else, right? We don't know that, right? So how should we think about the question of whether there should be an established religion? Well, this is where one of his conceptual innovations comes in. He says, "The way to think about it is from the standpoint of the most adversely affected person," because you don't know who you're going to be. So for any principle, if you could say, "Well, if I were the most adversely affected person by that principle, I would still choose it," then it starts to look like a procedural expression of the categorical imperative, because if the person most disadvantaged by it would choose it over the going alternatives then presumably everybody else would, okay? And so this is a misunderstanding of Rawls that people often get into. He says at one point that the standpoint of justice is the standpoint of the least advantaged person, but this is not a kind of bleeding-heart liberal point. He's not saying the standpoint of justice is the standpoint of the least advantaged person because we should feel sorry for the least advantaged person. That poor bag lady and that rich Trump, isn't that disgusting to contemplate? That's not his point. It's a self-interested point, a completely self-interested point. He's saying, "You figure out what you would choose in this situation of turning out to be the most disadvantaged person. That is the standpoint of justice, not because we feel badly for the most disadvantaged person, but because we want a universalizable principle," okay? That's the point. As I said, it's not a bleeding-heart point. It's a self-interest point. It's self-interest in the service of universalizability.
It's to get people to pick a principle that they would affirm no matter what, and that's the sense in which Rawls thinks of himself as a Kantian. Okay, now let's come back to religious freedom. Well, if we had an established church and you turned out to be a member of the established religion you would be completely happy, but if we had an established church and you turned out to be a non-believer, or a believer in a different religion, you wouldn't be happy, right? You'd be less happy, at least, than the person who turns out to belong to the established religion. That part's straightforward, but that's not the interesting comparison, it's not the illuminating comparison. So Rawls says, "Think about it like this. The question is whether or not to have an established church, an established religion, right? So think about the person who is not a member of the established religion in a world in which there is an established religion, right? Versus the believer in a world in which there is no established religion." Suppose you're an atheist, and you have no established religion, you're happy, but on the other hand the fundamentalist is unhappy, right? But so for Rawls the relevant comparison is would you rather be a fundamentalist in a regime where there's no established religion or a non-believer in a regime where there is an established religion? And his argument for disestablished religion is that the believer in the disestablishment regime has more religious freedom than the nonbeliever in the established regime, right? To make this concrete, fundamentalists have more religious freedom in America than non-fundamentalists have in Saudi Arabia or Iran. So the reason to prefer disestablishment of religion is that if you're trying to maximize the religious freedom of the least advantaged person you have to look at the least advantaged person in all of the possible regimes of governing religion, right? 
And so the defense of the establishment clause of the U.S. Constitution--he doesn't talk about this example, it's my example, but it's his logic, right? If you asked for the Rawlsian defense of the establishment clause of the U.S. Constitution, that's what it would be. Christian fundamentalists often criticize the establishment clause, particularly the way it's been interpreted by the courts. They say it's presented as neutral among religions and it's not. It's not neutral because, you know, people who think there shouldn't be an established religion get exactly what they want, but we who think there should be don't get exactly what we want, so it's not neutral. Correct, they're correct, it's not neutral. And Rawls actually contributes to confusion here because sometimes he talks about his theory as neutral. It's not neutral. And the requirement is not that it should be neutral, but rather that it should give the most extensive religious freedom to the person who's most disadvantaged in either system. So as I said, the key point is that the believer has more religious freedom if you have something like the U.S.'s establishment clause than the nonbeliever has when you have a fundamentalist regime. That's the claim. And so you want to give the most extensive system of religious freedom compatible with a like system for all, right? And the way you get to compatible with a like system for all is to look at it from the standpoint of the most adversely affected person, or the least advantaged person in any case. So that's the basic way in which he reasons. And that's why this rather empty-sounding phrase here that is his first principle actually has more content than might at first sight appear to be the case, right? So the standpoint of justice is the standpoint of the most disadvantaged person not because of being a bleeding heart, but because you want a universalizable principle. And it's just the cake cutting.
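The comparison just walked through has a maximin structure, and it can be sketched in a few lines. The liberty scores below are invented purely for illustration (Rawls assigns no numbers): score each person's religious freedom under each regime, find the worst-off person in each, and choose the regime whose worst-off person does best.

```python
# Illustrative religious-freedom scores (0-10); the numbers are assumptions.
regimes = {
    "established religion": {"believer": 10, "nonbeliever": 2},
    "disestablishment":     {"believer": 8,  "nonbeliever": 8},
}

def worst_off(regime_name):
    """Religious freedom of the least advantaged person under a regime."""
    return min(regimes[regime_name].values())

# Behind the veil you don't know which person you'll turn out to be,
# so you choose the regime that maximizes the minimum.
choice = max(regimes, key=worst_off)
print(choice)  # disestablishment
```

Notice that the chosen regime is not "neutral": the believer still does better under establishment (10 versus 8). The test is only that the nonbeliever under establishment (2) is worse off than anyone under disestablishment (8), which is exactly the fundamentalist-in-America versus nonbeliever-in-a-theocracy comparison from the lecture.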
You're giving the knife to the person who's getting the last slice. You say, "Pick the system that will give you the most religious freedom when you later discover what your beliefs are, right? And that is the principle you should affirm." And that is why a religious fundamentalist should choose the establishment clause of the U.S. Okay, now somebody might come along with some other principle and show that it does better and then we'd have to go through the process again. But so that's the basic structure of Rawlsian reasoning, if you like, about principles of justice. Okay, we'll pause there and pick up on Wednesday.
[The Moral Foundations of Politics with Ian Shapiro, Lecture 14: Rights as Side Constraints and the Minimal State]
Prof: So you'll recall from our discussion on Monday we were working our way through Nozick's hypothetical social contract story. The thought experiment he asks us to engage in as a way of thinking about the social contract idea as a basis for political legitimacy. And he asks us, in effect, to suspend disbelief and work through this story with him holding out the promise that he's going to show how the state can be legitimate. And so he said, "Imagine a hypothetical state of nature, not consisting of pre-political people, but of people like you and me in a condition where there was no government," and we would find that to be very inconvenient and inefficient because as Locke said, every single person would have to be an enforcer of the law of nature. Everybody would have to look after their own property. It would be highly inefficient. That, in turn, would lead to the creation of kind of block watch associations, mutual protective associations. They would also, though, not be particularly efficient because we all know from reading Adam Smith that the thing that really increases efficiency is division of labor. And so some people would go into the protection business and sell protection full time, but because coercive force is a natural monopoly, and we know from the discussion last time Nozick agrees with Glenn Beck that it's the only natural monopoly, eventually one of these groups would become dominant. And that he calls the--one of his slogans is, that's the "ultra-minimal state." Now I will just pause here. You might say, "Hmm, well if that was true, if coercive force is a natural monopoly why don't we have a world government?" I mean, if we went from militias to nations, why aren't nations just militias in the world? It stands to reason, right? It's just the same thing on a bigger scale. Does anyone wonder what Nozick might say? Anyone got any idea what Nozick might say in response to that? Otherwise it doesn't seem very plausible. 
Either coercive force is a natural monopoly or it isn't. No? Am I missing something here? What might Nozick say? He doesn't confront this. It's not anywhere in the book, but it seems like a natural question to ask, no? No? Anyone want to try this? Yeah? Student: He might argue that because of kind of natural boundaries or even artificial boundaries that have been created that the two protection agencies are sort of separated. Prof: Yeah, brilliant. I think you hit the nail on the head. I think he would say it's conditioned by available technologies of force. So if it becomes possible to project force all over the world then we might get to the situation where we will have a world government, that indeed countries are just like bigger militias within a country. And if there were the available coercive force to create a world government, it would be done. The famous pacifist philosopher, Bertrand Russell, had a kind of Nozickian view of this. He had opposed World War I. He had opposed the creation of nuclear weapons, but as soon as we dropped the bomb on Hiroshima in 1945, Russell came out and said, "America should immediately declare a world government." So he was Nozickian in his thinking, but what he lacked was the knowledge of a social scientist. He brought the clear-headedness of a philosopher, but it wasn't constrained by the knowledge of a social scientist, because the mere fact that you have the capacity to destroy the world doesn't mean you have the capacity to enforce obedience on the ground. This is a lesson we learned in Iraq after 2003. Yes, we could obliterate the Iraqi Army, but that's something quite different from actually being able to enforce the rule of law within Iraq on the ground, right?
And it turns out, we'll go into this in more detail a bit later, but although there is one respect in which coercive force is a natural monopoly in that, for it to be a good, it has to be enforced over a given territory, otherwise you just have the sort of situations we were talking about on Monday. Nonetheless, there are various economies of smallness in enforcement, and the community policing literature discovered this in urban contexts. So it's not as simple as it looks, but I think just from the point of view of what we need to complete the thought experiment here, Nozick would indeed say that the reason we don't have a world government is simply that the available technologies of coercion have not yet evolved. And so when it takes Hannibal's elephants to get over the Alps they are a natural boundary in a way that they are not when you can lob missiles over the Alps. And so we should always think when he says "a natural monopoly over a given territory," to some extent what counts as a given territory is going to be affected by available technologies of force. And we could play this out if you think about the transition from the Italian city-states to modern Italy, a version of that same dynamic, where changing technologies of force lead to the creation of larger units. But as the points I've just made suggest, this isn't entirely straightforward and uniform, but it is one basic dynamic. So that's, I think, what he would have said if somebody had raised that question. And so you get a single dominant protective association within a given territory, although we're agreeing that the notion of what counts as a territory is somewhat in flux and conditioned by technologies of force. That is, the dominant protective association has co-opted or marginalized all the others, and then you have a dominant protective association which he calls the ultra-minimal state. But then there are these people, these independents, and he gives various colorful examples of them.
But as I said, this was 1974. If we think about it in today's world you can think about these as people who don't recognize the legitimacy of the regime. So they could be people like the gent who flew his plane into a federal building in wherever it was, Texas, I think, a couple of weeks ago. It could be Timothy McVeigh who blew up the Oklahoma Federal Building, or it could be Osama bin Laden. These people are out there and they say, "Well, you might have your dominant protective association, but we don't care because we don't like it, we don't recognize it, and we're not part of it." And Nozick wants to say, "Well, just because coercive force is a natural monopoly, these protective associations can't give their members protection if they allow the Osama bin Ladens and the Timothy McVeighs to run around out there." So what are they going to do? They're going to force them to accept the authority of the state. They're going to force them to participate. They're going to say, "This is the deal. Take it and you can be part of the association, or leave it and you're going to be dealt with accordingly. We're going to lock you up. We're going to kill you. We're going to do something to you," right? That's what's going to happen because it's the case that these associations cannot protect their members if they leave the independents alone. I'm going to come back to that in a minute. So it's just, at this level, it's just a claim about what would happen, coercive force being what it is. And then you have what he calls the minimal state, AKA the classical night watchman state of liberal theory, the thing that Glenn Beck has in mind when he says, "The government should protect us from the bad guys and nothing else." The night watchman state doesn't do anything else. In particular it does not redistribute income and wealth. It does not go into those Pareto un-decidable zones.
Think back to the Pareto diagram. And the reason it doesn't go into those Pareto un-decidable zones is that we have a very robust doctrine of individual rights, right? Now you might say, "Well, why should we buy that?" And I think what Nozick would say is, he would come back with the doctrine of deep pluralism of values. There's deep pluralism of values. We don't agree. Some of us think we should have a welfare state. Some of us think we shouldn't. Some of us think we should have universal healthcare, some of us don't. We don't agree about these things. And because we don't agree about these things there's not going to be the pressure to produce a redistributive state. We're going to look at it, you know, one of his slogans is "rights as side-constraints on our actions." We think of rights as side-constraints on what we can do to other people. We can't elbow them as we go on our way to maximize our own utility. Rights are side-constraints. They're not end-states. They're not goals. So I don't want to give you unnecessary jargon, but I guess I'll give it to you anyway. I should have put it on a slide. So the philosophical lingo for the difference between Nozick and Rawls, as we'll see later, on the one hand and utilitarianism, at least in Bentham's variant, on the other hand is deontological versus teleological, d-e-o-n-t-o logical, deontological, versus teleological, t-e-l-e-o logical. And what is captured--deontological is a word that comes from the philosopher Immanuel Kant. We'll talk more about him in connection with Rawls. I mentioned to you on Monday the basic notion here is affirm principles that you would be happy with no matter how they affected you. They're not hypothetical imperatives. They're not, "I'll support private property if it makes me rich." They're rather, "If I support private property I'll support it regardless of whether it makes me rich or poor," right? It's not dependent on any particular empirical conditions.
You would affirm it no matter what. And Kant's famous example of a categorical imperative is the thing Nozick appeals to. "Respect people's autonomy." Don't treat them simply as means to your own ends, but as ends in themselves. And he's saying, "Well, if we want to respect people's autonomy (this is just a very strong version of Mill's harm principle)-- if we want to respect people's autonomy we can't impose conditions on them that they don't agree with." And given this empirical fact of deep pluralism of values you're not going to get a redistributive state out of Robert Nozick's little story. So that's where it's going to stop. Now you could say, "Okay, what is really being established here? Why is Nozick walking us through this zigzag? I mean it's, okay, it's one way to spend a Monday and Wednesday morning between 10:30 and 11:30, but what is the point? What really comes out of this? Why is he doing this?" And I think that we need to go back through it a little bit more carefully now and see. One of the things that he says is he's telling us both an explanatory story and a normative story. What I said here is basically the explanatory story. He's saying, "If you took people like us and you said, 'What would happen if there wasn't a state?' they would create a minimal state and they wouldn't create anything more." There's nothing normative in that yet, at least not obviously so. They would create Glenn Beck's utopia and that's all that they would do. You could argue about whether or not he's right about that and I'll come back to that in a minute, but that's not really Nozick's whole agenda. His real agenda is normative. What he wants to say is, "They would do this and this is the only legitimate state." He wants to convince you that this kind of a state, the classical night watchman state of liberal theory, liberal in the nineteenth-century sense of the term, libertarian we might think of it, I think, today, is the only legitimate state.
Now, if you said, "Well, why? What makes it legitimate?" the answer is that long and rather difficult to decipher chapter on compensation. I'll give you the bumper sticker version first and then I'll go back and walk through it. And you might think it's tendentious, but philosophers' examples often give philosophy a bad name. They create highly artificial examples that abstract massively from the real world, and people generally are not impressed when they try to connect it to real problems. My priors are to be suspicious of philosophers' examples, but Nozick is an exception. He really was a brilliant guy, and he doesn't do these things gratuitously. There are some interesting and consequential points that come out of this. So here's what he wants to say. This is the crucial set of moves, this one and this one. Why is it crucial? What he wants to say is this: we know that this is going to happen, so what could make it legitimate? We know it's going to happen because coercive force is a natural monopoly. In fact, being hardheaded realists about power, no minimal or ultra-minimal state is going to permit the bin Ladens and the McVeighs to run around threatening them. So they're going to force them to join. What could make it legitimate for them to do that since these independents haven't agreed to join, right? And Nozick does want to say agreement, consent, is the basis of all legitimacy. Seems like a contradiction. So he says, "What if we put it in the following terms? What if we say to the independent out there, 'You know what, you don't recognize the legitimacy of our operation, but it's too bad because there are a lot more of us than there are of you, and we're going to force you to participate, but here's the thing. We understand that violates your rights, you as an independent are having your rights violated, but we as members are having our rights violated by our fear that you may blow us up. Maybe you won't.
Maybe you're just a philosophical anarchist that wanders around the fields talking to horses and wants to be left alone, but we don't know that for sure. We don't really know that, and there are some people who want to blow us up out there. We know that. So everybody in our society, in our ultra-minimal state, is experiencing a decline of utility because you're out there and you might blow us up. Fear, we're experiencing fear. Now we understand that you, if you're forced to join, you're going to experience a rights violation, that's true, but what if we could compensate you for it? We're going to force you to join. What if we could compensate you for it and still be better off than we were not having compensated you and experiencing the fear?'" So it's a somewhat muddled discussion in the chapter. When people start reading what he says about compensation, they think he's talking about the members being compensated rather than the independents, but the idea is the members could compensate the independents and still be better off. Now it doesn't mean the members actually compensate the independents. Then there would be an obvious moral hazard problem, right? If my choice was to be a member and pay taxes or to be an independent and get compensated for being forced to join, nobody's going to be a member. We might as well be independents and get compensated. So he's not talking about actual compensation. Rather he's saying, "I'm going to violate your rights; if in principle I could pay you enough to make you whole and still be better off than before I violated your rights then everything's hunky-dory. We're both better off in a Pareto sense." So what he does in that chapter is he works his way through various ways this might happen, and they all fail. They all run into problems. How would you figure the price, etcetera, etcetera, right? 
And then he says, "Well, it doesn't really matter that I haven't given a watertight account of compensation, but just let's assume that some such account could be given, then we'd have solved the problem." And he's alluding here to about forty years of welfare economics that revolved around this idea of hypothetical compensation tests. For those of you who are interested, the famous people were Kaldor, Hicks, Scitovsky, Samuelson, and many others. All were trying to figure out a compensation test that was compatible with the Pareto system as you know it. So a way of thinking about it is we go back to our Pareto diagram and what we were doing when we were talking about neoclassical utilitarianism. Remember we said let's suppose X is the status quo. We know this is Pareto superior. We know this is Pareto inferior, and we can't say anything about the Pareto-undecidables, right? So the question is, is Y better than X for society? There's no answer to that question, and Nozick's endorsing that, right? He's saying, "B will think Y is better. A will think X is better. Who are we to adjudicate between them? There's no way to do it." But the people who were worrying about compensation were thinking about a different question. They were saying, "Well okay, we know we can't say that Y is better than X for society, but couldn't we say that Z is better than X? Although Z is Pareto un-decidable, it's on the possibility frontier." Remember the possibility frontier was the locus of points where it's not possible to improve, right? So the compensation theorists were actually interested in these two parts of the Pareto diagram. They were saying, "Can't we come up with some way to say that Z is a social improvement on X even though it's Pareto un-decidable because it's on the possibility frontier? Isn't there some way to say if you move from off the possibility frontier to on the possibility frontier you can say it's a social improvement?"
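The Pareto comparisons and the hypothetical compensation test being described here have a simple formal structure, which can be sketched in a few lines of code. This is an illustration added for clarity, not anything from the lecture itself; the two-person utility profiles, the numbers, and the function names are all invented:

```python
# Hypothetical sketch: utility profiles for two people, A and B, written as
# tuples (utility_of_A, utility_of_B). All numbers are invented.

def pareto_compare(x, y):
    """Compare a move from profile x to profile y.
    'superior' if someone gains and no one loses, 'inferior' if someone
    loses and no one gains, otherwise 'undecidable'."""
    someone_gains = any(yi > xi for xi, yi in zip(x, y))
    someone_loses = any(yi < xi for xi, yi in zip(x, y))
    if someone_gains and not someone_loses:
        return "superior"
    if someone_loses and not someone_gains:
        return "inferior"
    return "undecidable"

def kaldor_hicks_improvement(x, y):
    """Hypothetical compensation test in the Kaldor-Hicks spirit: the move
    passes if the winners' gains exceed the losers' losses, i.e. total
    utility rises, so winners could in principle compensate losers."""
    return sum(y) > sum(x)

x = (5, 5)  # status quo: A has 5, B has 5
y = (9, 3)  # A gains 4, B loses 2

print(pareto_compare(x, y))            # "undecidable"
print(kaldor_hicks_improvement(x, y))  # True: A could in principle pay B 2 or more and stay ahead
```

The catch the lecture goes on to identify is already visible in `kaldor_hicks_improvement`: summing A's and B's utilities in a single metric is itself an interpersonal comparison of utility, which is exactly what the Pareto framework was supposed to avoid.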
So in the lingo of welfare economics that was the project from Kaldor and Hicks, Scitovsky, Samuelson, and all of the others. They were looking for a way to do that that did not involve interpersonal comparisons of utility; really important that it not involve interpersonal comparisons of utility because if you make interpersonal comparisons of utility you're violating consent, right? If you say, "Well, Z is better than X, but A doesn't agree," you're violating A's rights. So is there some way to do that? And we see this all the time in eminent domain cases when the city buys up property forcibly against somebody's will to build a road. There's this endless to-ing and fro-ing about what is the value of the house, and the question is, can you figure that out without doing interpersonal comparisons of utility? And the problem with what Nozick is doing is that the answer is no. The answer is every compensation test that's ever been devised involves coming up with a common metric, namely money, in terms of which you do the compensation, and because of that, you're implicitly making interpersonal comparisons of utility. So the project of finding a compensation test that didn't violate the interpersonal comparisons criterion failed, and you can't actually do what Nozick wanted us to do. Now you could say--here's, again, an example of-- the whole architectonic theory doesn't work, but there still might be important insights that survive its failure. So you could say, for instance, "Okay, once we're aware of that, we can say that even in the creation of a minimal state, there are some interpersonal comparisons of utility, namely those involved in judging whether or not it's legitimate to incorporate the independents." And Nozick might make the following defense of what he's done. He might say, "Well okay, it was a nice try, but still what's left of my argument is the following." Another Kantian dictum is, "Ought entails can." Anyone want to tell us what that means? 
What does it mean to say, "Ought entails can?" Anyone know? When we say, "Ought entails can," anyone want to guess? Yeah? Student: The normative statements that we may make may create bright lines by which we can determine whether an action is or is not acceptable, and that's like--by saying that we should or should not do something we create this bright line where on one side we should and on the other side we can. Prof: You're in the right direction. You're making it more complicated than it needs to be. The basic idea is we can't have a moral obligation to do something that's impossible. We can't have a moral obligation to do something that's impossible, that's the way in which "ought entails can" is usually interpreted. Now some people say we shouldn't interpret Kant that way. We should interpret Kant to mean when he says, "Ought entails can," that if we ought to do something, we should find a way. I once had a graduate student who wrote a dissertation in which he said, "'Ought entails can' means that, 'Ought entails must try as hard as possible.'" Well, that's one way you could go with this, but that's not what I have in mind here. What I have in mind here, that Nozick, I think, would say when confronted with the failure of the compensation tests to work without interpersonal comparisons, I think Nozick would say, "Well, 'ought entails can' in the conventional sense. We can't expect people to do something that's impossible, and for that reason what drives us, really, is the natural monopoly of force argument." Another way you could see this, you could say, "Well, so the problem with the compensation argument, compensating the independents is, we could equally say if the independents could compensate the members for their fear and still be better off that would be just as good, right?" 
Nozick's saying the members experience the fear, they force the independents to join, and if they could compensate them and still be as well-off then it's legitimate, but you could do the exact opposite. You could say, "If the independents could compensate everybody, pay everybody a certain amount so that they stop experiencing fear and still be better off." In effect Osama bin Laden did this when Al-Qaeda made a deal in the 1980s. They promised the Saudis that they wouldn't do any terrorism in Saudi Arabia, and then the Saudi government left them alone. I mean, that eventually unraveled, but essentially you could say that was a version of compensating the Saudi population for the fear and still being better off. So Nozick would have to be indifferent between those two things if all that's driving it is the compensation. I think what he would say is, "Well, in principle, yes, but in practice, because of the natural monopoly of force, it's the independents who are going to lose. There are fewer of them," right? And in the event, just pursuing this example, since both the Saudis and everybody else knew that Al-Qaeda's ultimate objective was regime change in Saudi Arabia, it wasn't a very credible promise, and indeed there were Al-Qaeda operations in Saudi Arabia in the late 1990s. So it wasn't a very credible promise, and eventually after 9/11 the Saudis smelled the coffee and got onboard with the operation to try and stomp out Al-Qaeda. So yes, in principle, if the independents could compensate blab-de-blah and just be as well-off, we would be indifferent; in practice, because of the natural monopoly of force, the independents are going to lose. So ought entails can. It's not possible to imagine a state of affairs in which the independents are tolerated, so we'll accept that that is legitimate. In other words, the violation of rights is unavoidable, and therefore it's not illegitimate.
That would be the claim. Now Nozick would then say, "So my argument survives. My argument survives, because we've now supplemented the natural monopoly of force with ought entails can. There's no way not to have the ultra-minimal state become the minimal state, so its legitimacy can't be criticized for the violation of the rights of the independents." In a way it's a bit reminiscent of Locke's argument about majority rule that we're going to talk about when we get to the last part of the course in more detail. What you know already about Locke from what I've told you in the past is that Locke says when there are disagreements about what natural law entails there is no earthly authority who's allowed to settle them. Remember we went through this, right? So each person, in effect, has the right to enforce the law of nature, and indeed the responsibility to enforce the law of nature. God speaks to us all individually when we read the scriptures, and if you read them one way and I read them the other way, there's no pope, there's no king, there's no magistrate who can tell us who is right. It's one of the sources of Locke's egalitarianism. Another thing that Locke says, that I haven't talked about yet but will get into more detail, is, so what actually happens in practice when people disagree? He says, "If you think the state is violating natural law not only do you have a right, you actually have an obligation to resist, but if nobody else agrees with you what's going to happen? You're going to be out of luck, right? As Locke puts it he says, "You should look for your reward in heaven," i.e. if you take the view that the government is not legitimate, and you're going to resist it, and nobody else agrees with you, you're going to wind up being tried for treason and executed. That's what's going to happen, but this life is short and the next one's eternal, so why worry, right? It's essentially Locke's view of the matter, right? 
So the Nozickian independent is in the same position as a Lockean person who reads the scriptures to say that they must resist the state and very few others agree with them. They're just going to get their reward in the next life. So be it. But if lots of people agree with you then you're going to have 1688, then you're going to have a revolution. You're going to get rid of the monarch, right? So you can't know for sure that people are going to agree with you, but that's what's going to happen. And in a way, that is exactly the same point Nozick is making. He's saying, "There are a lot more people in the ultra-minimal state than there are independents, so it's going to go that way." There it is. It's the nature of power, the nature of coercive force, and ought entails can, bingo. What about that? How many people find that convincing? Nobody? Nobody's convinced? How many people find it unconvincing? There are zero convinced, two unconvinced, and 136 undecided, like Massachusetts voters, you people. What's wrong with it? Anyone want to...? Student: Well, it seems that forcing the independents to submit to the minimalist state directly contradicts the pluralistic deontological values that it's espousing, so it undermines the state at its very core. Prof: And what does it contradict, the value of consent? Student: Excuse me? Prof: You say it directly contradicts the intellectual values he's espousing--which value, the value of consent? Student: The value of consent and how you said it should be minimalist because he believes in pluralism and because you can't force someone, and like the Pareto thing, you don't know. Prof: Right, but his answer is ought entails can. It would be lovely if we could get everybody's consent, but there's no way to get everybody's consent.
Student: Well then it follows from that that the government should be able to enact other kinds of policies where ought may entail can and they can force people to do things. Prof: So what would be an example? They would have to just tolerate these independents out there, yeah? There's a little book I used to teach in this course sometimes by a philosopher called Robert Paul Wolff called In Defense of Anarchism, and he basically goes the way you were going. He says, "Consent is the wellspring of legitimacy. You can't get unanimous consent for any state, so no state is legitimate." At least he's consistent, right? All states are illegitimate, finished, end of story. Here's, I think, what Nozick would say. He would make two points. He would say, "Robert Paul Wolff is an airhead because he's forgetting about the difference between the philosophical game we're playing, the thought experiment, and the real world, because in the real world there are two things that are different. One is the natural monopoly of force argument we've already talked about, and the other is the limitation that ought entails can puts on what's possible." And you might not buy that because you might say, if we had time to go back and forth, you might just say, "I just don't buy that. States could protect themselves from independents without wiping out the independents. They could have policies of containment." I even wrote a book about that. So we could go back and forth about that. But I think the other thing that Nozick might say to Robert Paul Wolff is this. It would involve buying the critique of the social contract metaphor. He would say, "Well, we're behaving as though not having collective action is really an option, but it isn't, because it's always the case that we have some collective action regime. The question is just which one." So in any given situation if you require unanimity you privilege the status quo.
We know that, right? The garbage-in/garbage-out problem: if some people want to have a state and some people don't want to have a state, then if you start with a state and require unanimity to change it, that harms the people who don't want a state. But if you start without a state and require unanimity to change it, that harms the people who want a state. So in either case you're going to be violating somebody's rights. So the bottom line is that anarchy, even if it existed, even if we had anarchism, wouldn't meet Wolff's criterion of respecting everybody's autonomy because there would be some people who didn't want that. Okay, we've got two minutes left so I'm going to leave you with the beginning of a deeper puzzle about Nozick that we'll pick up on Monday. You could say, "Okay, we'll grant it. We'll grant everything you're saying, Nozick." But then, what if people start to say, "Okay, we're now a minimal state, but a lot of us are afraid of unemployment, we see recessions come and go, and unemployment can suddenly shoot up to ten percent. I could lose my job, not be able to pay my mortgage." That fear reduces a lot of our utility, so we're going to create unemployment insurance, and some people don't like it. Some people would rather take the risk, internalize the risk, but you know what? We're going to treat them in just the way you treat independents. So we're going to say, "Well, we know you don't like funding unemployment insurance, you'd rather internalize the risk, but you know what, there are a lot more of us than there are of you and we believe we could compensate you for the cost in principle. Of course, we don't compensate in practice. We believe we could compensate you in principle for the rights violation of forcing you, who doesn't want to pay for unemployment insurance, to pay for unemployment insurance and still be better off.
So tough luck, you're going to pay for unemployment insurance and then all of us will be happier because we'll be back on a higher indifference curve than when we were worrying about what might happen if we lose our job." And so the trouble with you, Nozick, is you're too clever for your own good because either your argument doesn't establish enough or it establishes too much, because if we give you this type of reasoning to get to the minimal state we can hijack exactly the same reasoning to get to the welfare state. We can just put it in this idiom of compensation for fear, and we can justify a more extensive state than you want. And so either your argument doesn't get you the minimal state or it gets you too much of a state. We'll start with that next time.
|
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
|
1_Information_and_Housekeeping.txt
|
Prof: This course presumes no prior knowledge of its subject matter. That is to say you can take this course without having done any political philosophy before. The materials we're going to look at in this course can be approached at a number of levels of sophistication. Indeed you could teach an entire course on just John Stuart Mill, or John Rawls, or Karl Marx, or Jeremy Bentham, and this means that some of you who may have had some prior acquaintance with some of these texts will be able to explore them in a different way from newcomers. But the course is designed, as I said, to be user-friendly to people who are doing this for the first time. There are a few parts of the course in which I make use of technical notations or diagrams. Now, it is true, it's just a fact about human beings that if you put a graph, or a chart, or a curve up on a screen there are certain people, some subset of the population, who get a knot in their stomach, they start to feel nauseous, and their brain stops functioning. I can totally relate to it because I'm actually one of those people by disposition. And what I can tell you about our use of charts, and diagrams, and notations in this course is they're simply shorthand for people who find it useful. But I will do nothing with diagrams and charts that I don't also do verbally. So if you don't get it the one way you'll be able to get it the other way. So you should never feel intimidated. As I said, for people who find graphs and charts useful they're a form of shorthand, but obviously if they intimidate somebody and they make what's being said opaque then they're being self-defeating. And as I said, I will always walk verbally through anything that I also do with charts and diagrams. Secondly, related to that point, it's my commitment to you that this is a course that's done from first principles and everything is explained from the ground up. I might forget that contract sometime and use a term that you don't understand.
I might use a word like "deontological," and you'll sit there and you'll be thinking, "What does that mean?" And the high probability is that if you don't know what it means there are probably seventy other people in the room who don't know what it means either. And so if you put up your hand and ask what it means you'll be doing those sixty-nine people a favor because they wanted to know what it means as well. So we shouldn't have any situation in this course in which I'm using some term and you can't follow what I'm talking about because you don't understand what it means. It is a rather embarrassing fact about political philosophers that they say in words of five syllables what could be said in words of one syllable. But part of my job here is to reduce them to words of one syllable. That is, to take complex theoretical ideas and make them lucid and intelligible to you. And I see that as a big part of what we're doing here so that your takeaway from this course three months from now will include feeling very comfortable with the language of political philosophy and the central terminology in which it's conducted. So hold my feet to the fire on that if you need to. If I use words you don't understand put up your hand and stop me. I will from time to time throw out questions and we'll have a microphone that we can pass around so that people can answer the questions. It's one of the ways in which I gauge how well the communication between us is going, so you should expect that. So this is a course about the moral foundations of politics, the moral foundations of political argument. And the way in which we organize it is to explore a number of traditions of political theorizing, and these are broadly grouped into a bigger distinction that I make between Enlightenment and anti-Enlightenment thinking. That is to say we're going to start off by looking at the Enlightenment. Now you might say, "Well, what is the Enlightenment?
How do you know it when you trip over it?" and that is a subject I'm going to get to on Wednesday and Friday. But for right now I'll say just dogmatically, and I'll elaborate for you later, that the Enlightenment revolved around two ideas. The first is the idea of basing our theories of politics on science-- not on religion, not on tradition, not on superstition, not on natural law, but on science. The Enlightenment was born of an enormous optimism about the possibilities of science. And in this course we will look at Enlightenment theories that put science at the core of political argument. The second main Enlightenment idea is the idea that individual freedom is the most important political good. And so if you wanted to get the bumper sticker version of the Enlightenment account of politics, it is, "How do you scientifically design a society to maximize individual freedom?" Now, within that, we will look at three Enlightenment traditions. We'll look at the utilitarian tradition, the Marxist tradition, and the social contract tradition. And again, I'll just give you the one-line version now and then we're going to come back to all of these, of course, in much greater detail later. The utilitarian tradition says that the way in which you create a scientifically organized society is you maximize the greatest happiness of the greatest number. This is the slogan of utilitarianism. Maximize the greatest happiness of the greatest number. You'll find there're huge disagreements among utilitarians about how you measure your happiness, and how you maximize it, and how you know when you've maximized it and so on, but the utilitarians all agree that that's the goal, and if you can do that you will do more to maximize human freedom than anything else. 
The Marxist tradition has a very different theory of science, what Marx called the science of historical materialism, but it too was based on this idea that we can have impersonal scientific principles that give us the right answer for the organization of society. One of Marx's famous one-liners was that we will eventually get to a world in which politics is replaced by administration, implying that all forms of moral disagreement will have gone away because we will have gotten technically the right answers. Another formulation of that same idea actually comes from a different Enlightenment thinker who we're not going to read in this course, David Hume, who said, "If all moral disagreements were resolved, no political disagreements would remain." So that's the idea of a scientific solution to what appear to be the moral dilemmas that divide us. So for Marx we'll see a very different theory of science, but for him too, he thinks that freedom is the most important good. That might surprise you. Most people think, "Well, Marx was about equality. He was egalitarian." We'll see that that's only true in a somewhat derivative sense because in the end what was important for Marx was that people are equally free, that they are in a situation of not being exploited, and he too, therefore, is an Enlightenment thinker. Then the social contract tradition says that the way we get a scientific theory of society is to think about what agreement people would make if they were designing society for the first time. If society was going to be based on a contract, what would it look like? And this is what gives us the right answer as to what is-- rational scientific principles tell us how we should organize society, and it's a world in which people's freedom is preserved because it's what they choose to do. 
Again, as in these other Enlightenment traditions there's massive disagreement about who makes the contract, how they make it, what the content of it would be, but it's the metaphor of a social contract that shapes all reasoning about the way in which you can organize society scientifically in order to preserve freedom. So in the first two-thirds of the course we're going to work our way through those three Enlightenment traditions. But every current has its undertow, and even though the Enlightenment was this enormously energetic and captivating tradition that really starts in the seventeenth century and gathers steam in the eighteenth century, there was always resistance to the Enlightenment, both to its preoccupation with science and to its view that individual freedom is the most important good. And so after we're done looking at the Enlightenment, we're going to look at anti-Enlightenment thinking, and the tradition that resists the idea that there are scientific principles around which society can be organized, and resists the idea that the freedom of the individual is the most important good, and we'll explore that tradition. And then in the last part of the course we will turn to the democratic tradition which tries, at least in the way I will present this to you, to reconcile the anti-Enlightenment critique of the Enlightenment with those elements of the Enlightenment that survive the anti-Enlightenment critique, if you see what I'm saying. So democracy becomes the resolution, at least in the way I'll describe democracy in this course. Thereby hangs another tale that I want to tell you about this course. The course is introductory and presented in a user-friendly way to newcomers, but it also is an argument. That is, I'm presenting an argument, a point of view, which some of you will be persuaded by and some of you won't, and that is totally fine. The idea is not to make you think what I think or what your teaching assistant thinks.
It's rather to make you understand the logic underlying your own views better than you have before, and perhaps see the appeal of views you have hitherto rejected more clearly than you have before. So the idea is to enhance the sophistication of your own understanding of politics, not to have you parrot my views, or teaching fellows' views, or anybody else's views. It's rather to understand the nature of your own views and how they might connect or live in tension with the views of others. One thing you're going to find, I should also say just as a matter of truth in advertising, we're going to look at a number of what I would call architectonic theories of politics, the theories that try to give the whole answer. This is Jeremy Bentham. This is his scientific theory. These are all the pieces. This is how they fit together and this is what it means for the organization of schools, and prisons, and parliaments, and all the rest of it. He's got an architectonic theory of the whole thing. John Rawls, as well, you'll see an architectonic theory of the whole thing. One of the takeaway points of this course is going to be that architectonic theories fail. There is no silver bullet. You're not going to find a takeaway set of propositions that you can plaster onto future political dilemmas. What you're going to find instead, I think what's going to help you in this course, what's going to be the useful takeaway, is rather small and medium sized insights. You're going to find things to put in your conceptual bag of tricks and take and use elsewhere, and they're going to be very helpful to you in analyzing a whole variety of problems. I think if you talk to other students who've taken this course that tends to be the most useful takeaway that you get, that you'll find. 
When somebody brings up an argument, say, about what people are entitled to, you'll have a whole series of questions you would ask about that argument that you wouldn't have asked if you hadn't taken this course. So you'll find a lot of small and medium sized bits and pieces that you can take and use in other contexts, but you're not going to find a one-size-fits-all answer to the basic dilemmas of politics. Let me say one other thing about this course as being an argument, that the argument's presented from a particular point of view. You might say, "Well," looking through this syllabus, "Hmm, this guy is pretty arrogant. I mean, here we have John Locke, John Stuart Mill, Jeremy Bentham, and he's got his own, some of his own work here on this syllabus. Who does he think he is? I mean, these are the greats of the tradition and he's putting his own work here? It takes a lot of chutzpah to do that." And let me tell you a little vignette that I think will give you the spirit in which my work is on this syllabus. When I was an undergraduate there was a great Kant scholar called Stephan Körner who was here in the Yale Philosophy Department for many years and also taught at the University of Bristol in England. And I attended his lectures on Kant, and he stood up in the very first lecture and he said, "Kant was a great philosopher, und I am a minor philosopher, but with me you have the advantage that I am alive." So this is the spirit in which my work is there, and it's not remotely intended to be a suggestion that 200 years from now or 300 years from now people will be reading it or that it stands on a par with the classic works of the tradition. But one of our agendas in this course is not just to get you up to speed in the great texts of these different traditions, but to give you some sense of how people who currently do this for a living argue about these ideas. So in each one of the five traditions that we look at, we're going to begin with a classic formulation.
So Jeremy Bentham is the locus classicus of classical utilitarianism. He's the major formative statement of that view. So we'll start with Bentham, but then we will bring utilitarianism up to the present day. We'll explore how the utilitarian tradition evolved since the eighteenth century and we will bring you up to contemporary considerations about utilitarianism, what people argue about in the journals today, and the book literature, and so on. Likewise with Marxism, we'll start with Marx and Engels themselves and then bring you up to contemporary debates about Marxism. With the social contract tradition, we'll start with John Locke, who famously formulated the social contract idea in the seventeenth century, but we'll bring it up to modern contract theorists like Robert Nozick and John Rawls. For the anti-Enlightenment tradition we'll go back to Edmund Burke, the great anti-Enlightenment thinker, an opponent of the French Revolution, but we'll bring anti-Enlightenment thinking up to contemporary thinkers like Alasdair MacIntyre. And finally with the democratic tradition, we'll go back to the Federalist Papers, which is in many ways one of the most important statements of what's at issue with democratic principles, if not a defense of democracy, as we'll see later, and bring that up to the contemporary literature on democracy which is where my own thinking comes in. But as I say, you should remember Stephan Körner's admonition that you're getting the benefit of the fact that I happen to be around in the first decade of the twenty-first century, not that I'm attempting to put myself on that kind of a pedestal. Now, I want to say a few more general things about the course just to give you a sense of the flavor of what we do here. You might say, "Well, what is distinctive about this course as compared with other introductory political theory and political philosophy courses that you could take around here?"
And I think there are four senses in which this course is distinctive, not necessarily better but just different, and so I can give you some sense of what it is that you would be letting yourself in for here. The first is what I've just mentioned: that with each of these five traditions we really are going to take them from a classical formulation up to contemporary discussions. So you'll have at least a working sense of how these traditions have evolved over the course of two or three hundred years and what form debates about them today take. The second is that this course is really going to mix the theoretical with the applied. We are going to look at first principles. I use the term foundations advisedly in the title of the course there. It's something of a loaded term in that there are some people who think we should do political philosophy without foundations. And I'll have something to say about those arguments later in the course, but I do want to signal with that term we will be interested in foundational questions, the most basic questions you can ask about politics, but we will never limit our attention to those questions. We will work these doctrines through a huge array of contemporary problems ranging from abortion, to affirmative action, to the death penalty, to all kinds of other things that are of concern to you as we go through. So very much a part of what we do in this course is to look at how these doctrines actually play out on the ground. So we go back and forth from particular examples to general arguments and back to particular examples a lot in this course, and in that sense it's more of a course, I'd say, in applied political philosophy than many courses one might take, here or elsewhere. A third distinctive feature of the course is that I'm going to organize it centrally around one question, which seems to me at the end of the day to be the most important question of politics.
And that is the question that I put in the first sentence of the syllabus there. When do governments deserve our allegiance and when should they be denied it? When and under what conditions should we obey the government, when are we free to disobey the government, and when might we even have an obligation to oppose the government? Another way, if you want to translate this into the jargon of political theory, what is it that makes governments legitimate? What is the basis for legitimate government? That is going to be the core organizing idea or question with which we're going to interrogate these different traditions that we examine, utilitarian, Marxist, social contract, anti-Enlightenment and democratic traditions. We're going to look at how does each one of those traditions answer the most basic questions about the legitimacy of the state. As I say, I think it's ultimately the most important question in politics. It's not the only question in politics. It's not the only way to organize a course in political philosophy, but it is the way in which we'll organize this course. We'll focus our questions on legitimacy, and it'll provide the template for comparing across these traditions, right? We will be looking at how utilitarians, or social contract theorists, or democratic theorists look at this basic question of what it is that makes governments legitimate, how we know that when we fall over it, and what we should do about it. So that's the third sense in which the course is distinctive, and the fourth one I want to mention is that we're going to go back and forward between two modes of analysis which for want of better terms I call internal and external, and let me explain what I mean by those terms. When you look at an argument that somebody puts forward, and you look at in the way that I'm describing as internal, what you're basically saying is does it make sense? Is it persuasive? Are the premises plausible? Do the conclusions follow from the premises? 
Are there contradictions in what the person's saying? Does it all hang together? Should I believe it? Is it a good argument? That's what internal analysis is about, okay. External analysis is looking at the argument as a causal force in the world. What social and political arrangements is this argument used to justify, or what social and political arrangement is it used to attack? How does this operate as a political ideology in the world out there? What effects does it have if I embrace this argument? So it's not a question about whether or not it's a good argument or you should believe it, but a question about how this argument is efficacious in the world. Because there could be terrible arguments that are nonetheless very efficacious in the world, right? And there might be very good arguments that nobody takes seriously in day-to-day politics. And one of the great aspirations of the Enlightenment is to produce arguments that both make good analytical and philosophical sense on the one hand, and can be influential in the world on the other, but those things don't necessarily go together. And we're going to ask a question, why, in the context of exploring all of these traditions, if there are good arguments that are not efficacious, why that is? If there are bad arguments that are efficacious, why that is? But in any case we're going to, even if we can't answer that why question, which is a very hard question to answer, we're going to look at these arguments and these traditions both internally and externally. You're going to look at them as arguments and you're going to look at them as ideologies, as systems of thought that get trafficked in the political world. And I think that that is another feature of this course that differentiates it from other introductory political theory courses. So I think that gives you something of a flavor of what is distinctive in what we do here. Any questions about any of that? 
If any of it's puzzling to you it's probably puzzling to somebody else. So in the spirit of getting us going we're going to start with a real world problem. We're going to start with the problem of Adolf Eichmann, who was a lieutenant colonel in Nazi Germany, who was responsible for organizing the shipment of Jews to Nazi death camps. And at the end of the war, he was captured along with a lot of other former Nazis, and he was inadvertently released. They didn't realize that he was a significant player in the organization of the so-called final solution of the Jews, and they released him and he escaped. And like many other former Nazis who escaped he went to Argentina and he lived under an assumed name for many years until the late 1950s when the Israeli Secret Service, the Mossad, figured out that he was there and figured out who he was. And they sent a group of people, essentially commandos, who captured him, spirited him out of Argentina, took him to Israel where they brought back the death penalty which had not existed at that time in Israeli law, tried him for crimes against humanity, which was the same concept that had been employed at the Nuremberg Trials of his cohort after World War II in the late 1940s, crimes against humanity and crimes against the Jewish people, and they executed him. And at that time a young political theorist, not particularly well known, called Hannah Arendt, covered the trial for The New Yorker magazine in a series of articles which were subsequently published as a book called Eichmann in Jerusalem, which is what I'm having you read for Wednesday's class. And we're going to use this Eichmann problem as a way into the central conundrum of the course, which I said is what is it that gives states legitimacy? When should we obey the government, when are we free not to, and when should we perhaps even be obliged to oppose the government? Because this problem was thrown into sharp relief by the conduct of Eichmann during World War II.
And I want you to think about two questions while you read this book, which is essentially a compilation of Arendt's New Yorker articles. The first is up here. What I want you to do is to think about what the two things are that make you most uncomfortable about this man, who you'll get to know quite well through reading this book. What is it about him that is unnerving? What is it that makes your flesh crawl about this guy? What are the two things that are the most appalling about him? And then the other question I want you to address is the second one. What two things make you most uncomfortable about the events surrounding his apprehension, and his trial, and his execution in Israel? Those are your reading questions for Eichmann in Jerusalem, and write them down, and bring them with you to class on Wednesday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
9_The_Marxian_Challenge.txt
Prof: Okay, good morning. We're going to make the transition today to talking about Marxism, the second of the three Enlightenment traditions that we're going to consider in the first part of this course. And I should warn you at the outset that I have something of a heterodox view of Marx. I have something of a heterodox view of Marx in that if you went and read, for instance, a book by a man called Graeme Duncan called Marx and Mill, his basic story line is that Marx and Mill operate with fundamentally opposed paradigms, paradigms meaning foundational assumptions, so that Marx and Mill make completely incommensurable and incompatible assumptions about how the world works, and about the political theory that can flow from those assumptions. And so to some extent they are, he thinks, on all the big questions, speaking past one another. Basically you can't adjudicate, if you like, the disagreements between them, in part because he thinks they're speaking from fundamentally opposed paradigms. I think that view couldn't be more wrongheaded. Marx and Mill are both creatures of the Enlightenment, and therefore we will find in examining Marx that, just like all of the utilitarians we've already considered so far, he is committed to basing politics on a scientific theory of human association, and to individual freedom as the basic and most important principle of politics. So before we get into the details of Marx's own argument, let's say a few general things about Marx as an Enlightenment thinker. And if we think about the Enlightenment thinkers first of all as committed to science there's no question that Marx was committed to a scientific theory of politics. If you read his attack on Proudhon and the utopian socialists, for example, it was all about attacking them for being unscientific in their thinking.
It was all about rejecting sentimentalism or wishful thinking in trying to understand what was feasible and what was not feasible in politics. So Marx has a scientific conception. Of course it doesn't come within a country mile of the scientific conceptions we saw both among early Enlightenment theorists like Bentham and mature Enlightenment theorists like Mill. Rather, Marx is committed to something called the materialist conception of history. And the materialist conception of history is the idea that, to use another one of his rather impenetrable phrases, he's committed to the idea of dialectical materialism. You might say, "Well, what is dialectical materialism?" This comes from the idea first articulated by a German philosopher called Hegel that history moves in a kind of zigzag of fits and starts. The dialectical idea is that some change gets made, some innovation gets made. This, though, breeds a kind of undertow, a resistance against the first change, and then as a result of the initial change and the undertow, the kind of pushback, you don't return to where you started from; you get a new starting point. So Hegel's famous terms were thesis, antithesis, and synthesis. The thesis is a change. So you get, say, the transition from serfdom to a market-based society. Then you get resistance to that market-based society. A new working class comes into being, let's say, and then that working class becomes the agent of a new change that will produce its own antithesis and new synthesis. So history goes like this, if you like. It doesn't go in a straight line, but it goes in a direction. It goes forward in some sense. And Hegel's idea had been that history eventually reaches an ending point, and he thought the ending point was the Prussian state of his day. He thought all of history was evolving toward this supreme highest point of the Prussian state of his day, and it was guided by the working out of ideas in history.
Marx turns this on his head; on its head I should say, not on his head. Marx turns it on its head saying, "Well, yes history goes in this kind of zigzag direction of thesis, antithesis, synthesis, and then the synthesis becomes the new thesis and so on, and yes, it has an endpoint (which he thinks is a communist utopia), but it's not driven by ideas. It's not driven by ideas at all. Instead it's driven by material interest." So that's why Marx's view is sometimes called, as I said, dialectical materialism. It's driven by material interest, or as Bill Clinton put it in the 1992 campaign, "It's the economy, stupid." "It's the economy, stupid." That's the basic idea behind materialism: that ideas, culture, beliefs, all of that stuff is what Marx referred to as superstructure. It's not really important. What is important is the economic base, so that economic interests drive everything over time. And if you want to understand how a political system works you better understand the economic system. So in this sense it's a very different view than Bentham, or Mill, or any of these folks because they're really working in the realm of ideas, right? They're not Hegelians to be sure, but they think that this idea of shaping society in terms of their utilitarian calculus can be used in order to reorganize things. For Marx that would be a completely absurd agenda. You have to start with the economy. It's the economy, stupid. You have to understand what the powerful forces are in the economy and what the tensions and possibilities are within the economy before you can understand anything else about politics. And in that respect I think one thing you should really get straight right away is just what Marx thought about capitalism. How many people here thought Marx was against capitalism? Marx was against capitalism? Almost nobody? Marx wasn't against capitalism? How many think he wasn't against capitalism? One? Why do you think he wasn't against capitalism? Just get to a mic.
Student: He wasn't against capitalism because Marx thought capitalism was a necessary step in getting to socialism. Prof: You're exactly right. So what Marx thought about capitalism was, and we're going to understand the reasons for this in detail in the next couple of lectures, that for a certain phase of history it was essential. He thought capitalism was the most innovative, dynamic, productive mode of production that had ever been dreamed up, and there was no way you could even think about a socialist or a communist society developing unless you had capitalism first. And Marx would have had absolutely no sympathy for the Russian Revolution which was done in a peasant society, or the Chinese communist system either. He would have said they were completely premature because in the end it's going to be capitalism which is necessary to generate the wherewithal to make socialism possible. So he wouldn't have had any sympathy with the Leninist or Stalinist projects, which we'll talk about later. So he's not against capitalism. What he thinks about capitalism is that it's sawing off the branch it's sitting on over time. That is to say there are basic contradictions within the way capitalism works, and indeed the very things that make capitalism the most productive mode of production ever to have existed in human history at one point, those very same dynamics ultimately will undermine it. So in that sense don't think of him as simply against capitalism. He, rather, wants to understand the dynamic process that brings capitalism into being, leads it to maturity, and eventually leads it to self-destruct. And that is the story that he's going to tell us. So he has a scientific theory. It's a materialist theory, and it's based on this metaphor of the base and superstructure. People have come up with other metaphors, skeleton and flesh and so on, but you can play with them. But the core idea is that it's the material relations that shape everything else.
More controversial, though, is to say, "Well, Marx is really a believer in individual rights and freedoms." Most people say, "What? Marx, a believer in individual rights and Marx thinking that freedom is important?" Most people say, "No, Marx is an egalitarian. Marx is all about equality." And one of the things I'm going to suggest to you in my exposition of Marx is that that is basically wrongheaded. You'll see that when we come to talk about his idea of a communist utopia one of his bumper stickers for that is the claim that, "The free development of each is a condition for the free development of all." "The free development of each is the condition for the free development of all." So he's an egalitarian in the sense that, yes, he wants everybody to have freedom, and he thinks that that is denied to many people in most forms of social organization. But freedom is the most important value, nonetheless, and he wants to assure it for everybody. And related to that, you'll see when we come to talk, on Wednesday or next Monday, about the labor theory of value, that the basic thing that drives Marx is a theory of alienation from our true selves. We can't be free unless we're at one with our true selves. And every system of social organization before communism, in his view, makes it impossible for us to be at one with our true selves. We are alienated from our true natures as productive creatures by the way in which society is organized, and that's the basic problem. So we're denied our capacity for free action, according to Marx, and he thinks we can never realize it until we reach this communist utopia. So the takeaway point is going to be that Marx is an Enlightenment theorist par excellence. Pace people like Duncan, it's simply not correct to see him as doing something fundamentally different than the real Enlightenment thinkers such as Mill and Bentham.
Marx is an Enlightenment thinker, and when you want to see radical critiques of the Enlightenment you have to go to people like Burke and other anti-Enlightenment thinkers who we're going to be getting to after spring break. A second point related to Marx and individual rights and freedoms is you're going to see that we're going to go back to our discussion of Locke, and you're going to discover that Marx is a true Lockean believer in the workmanship ideal, not in his theory of science, but in his theory of rights and entitlements. Marx is going to embrace a kind of doctrine of self-ownership and the idea that we own what we make as the basis for his theory of exploitation, because his theory of exploitation is going to turn on the claim that people are, in fact, denied the fruits of their own labor because of the way that the system is set up. Well, you could say, "So what? Why should we care that people are denied the fruits of their own labor unless they're entitled to the fruits of their own labor," and indeed that is Marx's view. So you will see that--I was once accused of belittling Marx by calling him a minor post-Lockean, but with respect to the labor theory of value you will see that the main conceptual ideas that go into it are straight out of Locke's Second Treatise and it's straight out of the workmanship ideal. What Marx is going to try and do, when we get to the labor theory of value, is give a secular version of the workmanship ideal. And indeed, developing a viable secular version of the workmanship ideal is a project with at least as long a history as the history of utilitarianism, and at least as fraught with difficulties as the history of utilitarianism. And we will see, when we get to Rawls, an even more radical attempt than Marx's to develop a secular version of the labor theory of value and the workmanship model, and the difficulties it runs into.
So Marx, we will see, is the first in a long line of people who tried to secularize the workmanship ideal by creating this labor theory of value that's going to have a rather checkered future as we explore it into the twentieth century. So that's where we're heading. We're going to explore Marx as an Enlightenment thinker by means of three lectures, and the first one is going to deal with Marx and the challenge of classical political economy. Now, why do I say that? Because really there are two Marxes. If you were taking a course in the history of ideas that had more than three lectures on Marx, what you would discover is a lot of attention to his German roots. I've already mentioned that he was a disciple and critic of Hegel, the German idealist philosopher, but much of his writing until the mid-nineteenth century was inspired by his critique of Hegel's other followers and disciples, and we're pretty much not going to deal with that. We're not going to deal with the German Marx. We're not, by and large, going to deal with the young Marx. Rather, what happened to him was Marx thought--you may or may not know this, but in the 1830s there were revolutions across Europe. Kings and queens were kicked out of office and democracy came to power. And Marx and many of those around him thought this is the beginning of the end of capitalism. By 1832 or 1833, all of those democratic revolutions had failed, and the monarchies had been restored across Europe. But then in 1848 there was another whole series of revolutions across Europe, and once again Marx thought maybe this is the beginning of the end, and had great faith and optimism that that was going to be the case. But by 1851 those revolutions had all, again, failed and the monarchs were back in power. And Marx, after that, became steadily more dejected and steadily more skeptical over his remaining years that, in fact, he was going to see a communist revolution in his lifetime.
And rather, he invested in the project of trying to understand capitalism in a much more systematic way than he had done in his youth. He was kicked out of Germany. He couldn't go to France and he wound up in London, and he lived the last decades of his life in London, and in fact died and was buried there. If you're ever in London you can go up to Highgate Cemetery on the Northern Line. It's a very interesting cemetery. There are all kinds of interesting people there. George Eliot is there. Anyway, there is Karl Marx in Highgate Cemetery. It's all overgrown and very interesting. Anyway, he spent his last decades working in the basement of the British Museum, and that's where he composed his magnum opus, his three-volume work Das Kapital. It was actually envisaged as a twelve-volume work, and the only volume that was published in his lifetime was actually volume one. The second and third volumes were put together by his friend and collaborator Friedrich Engels. And unfortunately for us the volumes about politics were never written. So you have to, to some extent, put together his mature views about politics from scraps he wrote here and there, and some of the short pieces about politics that do appear in the early volumes. So we're going to mostly focus on the mature Marx. We're mostly going to focus on Marx as he set himself the task of understanding the dynamics of capitalism after he had pretty much given up on his youthful enthusiasm for the quick transformation of the world into socialist and then communist societies, which he had basically started to shed after 1848 with a brief revival of interest in 1870 given events in France, but basically it was a trajectory from youthful optimism into mature pessimism, at least from his point of view. Okay, so what he did was, he said to himself, "Well, what is it that people are involved in-- who are engaged in the serious systematic analysis of capitalism?" 
And here we tend to present Marx as an unorthodox or radical thinker, but in fact he was a very conventional thinker for his day. That is to say Marx was a follower of Adam Smith and David Ricardo, and he saw himself largely as tackling and refining the theories that they developed. And what I want to do with the rest of our time this morning is say something about the project of classical political economy that Marx inherited and contributed to, and why he thought getting the basic problems that the classical political economists were trying to solve solved would enable us to understand what the real conditions were that would eventually lead to the collapse of capitalism. The most important feature of capitalism, as far as Adam Smith was concerned in The Wealth of Nations and his followers after him, was the division of labor. The division of labor is really important because it has two features. One, as far as Marx will be concerned, it begins the process of alienating us from ourselves. Why does it do that? It does that because, instead of producing things that we then consume, we start producing things in a situation where we divide up tasks and that makes it impossible for us to live rounded lives. That's ultimately where it's going to go. But second, and much more important from the point of view of economics, is that the division of labor is the engine of productivity. The more you engage in the division of labor the more productive you make people. And Smith has this wonderful example right near the beginning of The Wealth of Nations where he talks about a pin factory. Smith has been going around England trying to understand where the dynamism in the English economy is, and he gives this description of a pin factory. He says, One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head. To make the head requires two or three distinct operations. 
To put it on is a peculiar business. To whiten the pins is another. It is even a trade by itself to put them into the paper. And the important business of making a pin is, in this manner, divided into about eighteen distinct operations. As a result of that division of labor, though, Smith calculated that ten workers could make 48,000 pins a day, but if they had all worked separately and independently all they could have produced was a few dozen. So that basic insight of Smith's, that it's the division of labor that is the engine of capitalist productivity, sets the terms for Adam Smith's analysis of markets in The Wealth of Nations, for Ricardo's refinement, and for Marx's refinement of both of their arguments in Das Kapital. So what was the project of classical political economy? What was it that Marx was stepping into and trying to contribute to? Well, one way of describing it, as I have it here, is, it was the search for theories of natural and market-- I've got theories there twice, sorry-- search for theories of natural and market wages, prices, rents, and profits. They wanted to understand what determines wages, prices, rents, and profits, but they thought you had to have a theory of natural wages, prices, rents, and profits, and market wages, prices, rents, and profits. Now, let's focus on prices, because the way the other three are treated is basically by analogy to the analysis of pricing. A modern neoclassical economist would say, "There is no natural price of anything." A modern neoclassical economist would say, "What determines prices is supply and demand in the market. There is nothing else to say about prices." And they gave up the notion that there's any natural theory of wages, prices, rents, and profits to be found. So a big difference between the classical theorists, and the modern theorists, is this idea that there's a theory of natural prices. Now, why would they think there's a theory of natural prices? 
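As a quick check on Smith's figures as quoted above, the arithmetic can be sketched in a few lines. The 20-pins-per-day figure for a solitary worker is an assumption here, a rough reading of the lecture's "a few dozen" (Smith's own guess was about twenty per worker, perhaps fewer):

```python
# Smith's pin-factory figures as quoted in the lecture: ten workers dividing
# roughly eighteen operations make 48,000 pins a day; working alone, each
# would manage only a handful. The solitary-worker figure is an assumption.
workers = 10
output_with_division = 48_000       # pins per day for the whole shop
output_alone_per_worker = 20        # assumed output of an isolated worker

per_worker_divided = output_with_division / workers
multiplier = per_worker_divided / output_alone_per_worker

print(f"Per-worker output with division of labor: {per_worker_divided:.0f}")
print(f"Productivity multiplier: {multiplier:.0f}x")
```

On these assumed numbers the division of labor multiplies each worker's output by a factor of a couple of hundred, which is the scale of gain Smith's example is meant to convey.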
The reason they thought there was a theory of natural prices was they were arguing with people like the physiocrats and the trade theorists about where value comes from. What is the source of value? The physiocrats in France said the source of value is the land, that somehow it gets transferred to the products. Whereas the trade theorists, who had been impressed by countries like Holland, which had so little land but had become so rich, said, "No. Value comes from trade." So they were all sort of saying, "Where is the it? What is the source of value?" And the English theorists following Locke, Petty--Sir William Petty, John Locke, Hobbes--all believed that the way you find value is to look to labor; that work, workmanship, is the source of value. So all classical political economists, Smith, Ricardo, and Marx, believed in the labor theory of value, and they counterposed it to theories based on trade, or the physiocrats' theories that were popular in France, as I've said. So that's one reason, but another is that it's not the case that they were ignorant of the laws of supply and demand--and we'll see how Marx handles them on Wednesday. It's not the case that they're ignorant of the laws of supply and demand, but they thought it couldn't possibly tell you the whole story. So you can think about it this way. Suppose there is a dearth in the supply of coffee mugs. The price will go up. They didn't disagree with that, right? And if there are too many coffee mugs, more coffee mugs than anybody wants, the price will go down. Maybe somebody will realize you can drill holes in the bottom and then use them to grow plants in, and then maybe the price would go up a bit because there would be more demand. But they thought that supply and demand fluctuate with what it is that people actually want. They didn't have any problem with that notion, but still in all they thought there must be something that will tell you what the price is when supply and demand are in equilibrium. 
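The classical picture being described here, market prices fluctuating around a "natural" anchor set by embodied labor, can be made concrete with a toy simulation. Everything numeric below (the labor hours, the wage, the shock size, the reversion speed) is an invented assumption for illustration, not anything drawn from Smith, Ricardo, or Marx:

```python
import random

# A sketch of the classical intuition: supply-and-demand shocks push the
# market price up and down, but competition pulls it back toward a "natural"
# price anchored in the labor embodied in the good. All numbers are assumed.
random.seed(0)

labor_hours = 5.0                 # assumed labor embodied in one unit
wage_per_hour = 2.0               # assumed money cost of an hour of labor
natural_price = labor_hours * wage_per_hour   # the classical anchor: 10.0

market_price = natural_price
prices = []
for _ in range(10_000):
    shock = random.gauss(0, 1)    # a random demand/supply fluctuation
    # Deviations from the natural price are partially competed away each step.
    market_price += 0.5 * (natural_price - market_price) + shock
    prices.append(market_price)

long_run_average = sum(prices) / len(prices)
print(round(long_run_average, 1))   # hovers close to the natural price, 10.0
```

The point of the sketch is only the classical claim in the paragraph above: supply and demand explain the fluctuations, but something else (here, the labor-value anchor) explains the point they fluctuate around.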
Another way you can think about it is supply and demand go up and down, but they go up and down around some point, and what tells you what that point is? Okay, that's what the labor theory of value was supposed to do. That's the natural price. I think a modern economist might call it the long-run equilibrium price. That's one way in which if you wanted to find an analogy in modern economic thinking to this classical idea of a natural price, it's the long-run equilibrium price. It's the price of a commodity when supply and demand are in equilibrium. So the labor theory of value was a theory of that. It was going to tell you what the long-run equilibrium price of a commodity would be, okay? And all of the classical political economists were trying to have a theory of that. So when you read "natural prices," that's what you should think. They're trying to understand what-- not marginal changes in the sense of a Pareto diagram, but the long-run changes given the marginal fluctuations around some point. The question is, what is that point? And that's what the labor theory of value was designed to give you. It was a theory, in that sense, of natural prices, not a theory of market prices. They thought you had to have both. Supply and demand gave you the market price. The labor theory of value gave you the natural price. A second big problem that framed the project of classical political economy was that they all believed, as they looked around them, that there was a declining tendency in the rate of profit. This was taken as a given; as a given, widely-accepted empirical fact. And so if it was the case that the rate of profit tends to decline over time, it can be offset by various things and so on which we will talk about, but if it's the case that the rate of profit in capitalist economies declines over time, your theory isn't going to be worth a damn unless you can explain why. 
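The lecture defers the explanations of the declining rate of profit to later, but Marx's own eventual formalization (from the later volumes of Das Kapital, not quoted in this lecture, so treat both the formula and the numbers as assumptions here) makes the arithmetic easy to sketch: the rate of profit is r = s / (c + v), and if mechanization raises constant capital c relative to variable capital v while the rate of surplus value s/v stays fixed, r falls.

```python
# Marx's later formalization (an assumption here, not given in this lecture):
#   r = s / (c + v)
# where c = constant capital (machines, materials), v = variable capital
# (wages), and s = surplus value. Hold the rate of surplus value s/v fixed
# and let the ratio c/v rise: the rate of profit r declines.
def rate_of_profit(c, v, s_over_v=1.0):
    s = s_over_v * v
    return s / (c + v)

for c_over_v in (1, 2, 4, 8):
    r = rate_of_profit(c=c_over_v * 100.0, v=100.0)
    print(f"c/v = {c_over_v}:  r = {r:.2f}")
```

On this sketch the profit rate falls from 0.50 to about 0.11 as c/v rises from 1 to 8, which is one way of rendering the tendency that Smith, Ricardo, and Marx each felt obliged to explain.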
So one of the tests, if you like, of a good theory, as far as Smith was concerned, as far as Ricardo was concerned, and as far as Marx was concerned, was it had to explain the declining tendency in the rate of profit. And Smith had an account, Ricardo had an account, and Marx had an account. They all differed somewhat. They all overlapped in some ways. We'll see part of the reason all three of them thought you would have to have imperialism was you needed new markets to offset this problem of the declining tendency in the rate of profit at home, but that was only going to be a temporary stopgap or solution. But in any event, your theory wasn't going to be worth anything unless it could explain why prices are what they are, the theory of long-run equilibrium prices, and second, unless it gave a credible account of why profits fall in capitalist systems over time. Now, a third conceptual point to make that you need in order to understand the project of classical political economy as Marx understood it is actually related to the first point, but I'm just singling it out because people sometimes get confused about it. It's a distinction that Marx makes between use-value and exchange value. Sometimes you'll see Marx uses the word value with a big V. You should just read that as exchange value. So value with a big V is exchange value. So what is the difference between use-value and exchange value? Well, for Marx use-value is utility. That's all it is. Use-value is simply usefulness. And he has a kind of binary theory of use-value. Either things have use value or they don't. So coffee cups have a use-value, you can drink out of them. If somebody drilled holes in the bottom of them they would have no use-value until somebody came along and said, "Well, we can use them to grow plants," then they would have use-value. So things either have use-value or they don't, and I'll come back to that in a minute. 
And that's going to affect the supply and demand, but it's not going to explain the price. It's not going to explain the long-run equilibrium price. That is the exchange value and that is what we learn about from the labor theory of value as far as Marx was concerned. So it's the labor theory of value explains the price, and is not to be confused with the use-value or utility of an object. Again, one important difference, and maybe the most important difference between classical political economy and neoclassical political economy is, we will see later, the neoclassical people shed the labor theory of value and so there is no exchange value independent of utility, but we'll get to that. That's getting ahead of ourselves. When you're concerned with the classical formulations, and in the nineteenth century, exchange value or price, or value with a big V, is determined by the labor theory of value and use-value is usefulness. And that brings me to Marx's definition of a commodity. A commodity has a very special meaning for Marx. It's something that is produced for exchange. So if you plant an apple tree and you grow apples in order to eat them, you're producing for consumption, those apples are not commodities. But if you plant an apple tree and grow apples and sell them, then those apples are commodities, right? And something is a commodity if it's produced for exchange. And that's very important because once you have a division of labor you have more and more commodity production. And you might recall in my very first lecture I said that one of the things Marx shares in common with Bentham is he's somebody who pushes the idea he has to the absolute extreme and then even beyond that. So what we're going to see happen with Marx and commodification is exactly analogous to Bentham and utility. That Bentham says, "How would the world look if utility maximization was the only game in town, if there was nothing else at all?" and so his theory runs. 
With Marx it's, "How would the world look if commodification was the only game in town?" Everything in a capitalist system becomes a commodity, even the worker himself. Even the worker himself. And when we start to understand the dynamics of capitalist production we'll see that the analysis of the value of a worker is no different than the analysis of the value of the pin that the worker produces in Adam Smith's factory. So he takes this idea of commodification and pushes it to the hilt. So I think if you think about those four features of the project of classical political economy it gives you a sense of what was in Marx's head as he set off to try and understand the dynamics of capitalist systems. And what he did was, he said, "Well, we have to understand, first of all, how capitalist systems look to the participants, because the way in which they look to the participants is not the same as the way in which they really work." So there's a basic distinction, if you like, between appearance and reality. The way in which people think market systems work isn't the way in which they actually work. So it's a little like Bentham saying, "You might think you're not motivated by utility and all the rest of it, but in fact you are, and the rare case is we get it right, not the rare case is that we get it wrong." In this sense Marx, like Bentham, is an objectivist. He thinks he's describing scientifically the laws of motion of capitalist systems regardless of whether the participants in those systems understand these laws of motion, and for the most part they don't. Indeed he's going to claim ultimately that a socialist revolution differs from all others because for the first time in history, people understand the social relations that they're part of, and so can self-consciously transform them and create an unalienated social order. So at the beginning of volume one of Kapital he's basically saying, "How does it look to the participants?" 
Well, if you have a division of labor, you look around, what do you actually see? You see people producing commodities. They exchange those commodities for money, and then they use that money to buy other commodities, which they consume. So you have these cycles of exchange; commodity, money, commodity, right? So the person growing apples sells the apples for money, and then uses the money to buy milk and eggs and so on, in the simplest case, right? So if someone landed from Mars in the middle of a capitalist economy, that's what they would see; all of these people producing things, exchanging those things for money, and buying other things which they then consume, right? That was the idea. But then if you look a little bit more carefully, you see that actually some people are doing something entirely different. Most people are producing commodities, exchanging the commodities for money and then using the money to buy other commodities that they consume. But Marx says, "Some people are doing something different. They're taking some money, buying a commodity (in this case the labor of the worker for a certain time) and then selling what gets produced for more money." So they're not interested in consumption, it seems, at least at first sight. And indeed, if you look even more closely you'll see that when they do that, the money they end up with is more than the money they started with. M prime is bigger than M. And the question, the organizing question of Das Kapital--and it was the question that preoccupied Adam Smith and David Ricardo before Marx, and as I said, in this sense he was a completely orthodox classical theorist; his radicalism comes in later. The question is where does M prime come from? What makes profit possible? You're not going to explain why profits decline, after all, if you can't understand where profit comes from. What is the origin of profit? That's the basic problem. 
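The two circuits just described, C-M-C and M-C-M', can be written out with toy numbers to make the puzzle concrete: equivalents exchange for equivalents at every step, and yet M' ends up larger than M. All figures below are invented assumptions for illustration, not numbers from Marx:

```python
# C-M-C: the apple grower sells a commodity and spends the proceeds on
# consumption goods. Money in equals money out; nothing is left over.
apples_sold_for = 50.0
spent_on_milk_and_eggs = 50.0
assert apples_sold_for == spent_on_milk_and_eggs

# M-C-M': the capitalist advances money, buys labor power and means of
# production at their full value, sells the product at its full value,
# and still ends with more money than he started with.
M = 100.0                       # money advanced
wages = 60.0                    # labor power bought at its full value
means_of_production = 40.0      # inputs bought at their full value
assert wages + means_of_production == M

# The key assumption (Marx's resolution of the puzzle, sketched with made-up
# numbers): labor power's use-value is that it creates more value than its
# own value, i.e. more than the wages paid for it.
value_created_by_labor = 90.0
product_value = means_of_production + value_created_by_labor
M_prime = product_value         # sold at its value -- no cheating anywhere
surplus_value = M_prime - M

print(M_prime > M, surplus_value)
```

Every exchange in the sketch is an exchange of equivalents, yet a surplus appears; locating where that surplus is generated is exactly the problem the passage quoted next sets up.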
The transformation of money into capital has to be developed on the basis of the immanent laws of the exchange of commodities, in such a way that the starting point is the exchange of equivalents. The money-owner who is as yet only the capitalist in larval form, must buy his commodities at their value, sell them at their value, and yet at the end of the process withdraw more value from circulation than he threw into it at the beginning. His emergence as a butterfly must, and yet must not, take place in the sphere of circulation. These are the conditions of the problem. A very famous passage at the beginning of Kapital telling us that if you want to understand why the capitalist winds up with more you have to understand the source of value, because there's no cheating; equivalents exchange for equivalents in every one of those cycles. It's not that somehow the worker is tricked into selling his labor power for less than its worth. On the contrary, he's paid exactly what its worth. So once you can understand that puzzle of how, when equivalents exchange for equivalents, new value is still nonetheless created, then you understand the secret to where profit comes from, you know the source of value, and you can begin the process of understanding the dynamic productiveness of capitalism and why it will eventually start to fall apart. And we will dig into those subjects on Wednesday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
19_The_Burkean_Outlook.txt
Prof: So this morning we're going to start talking about Edmund Burke and the anti-Enlightenment. And one prefatory note is that when thinking about political theory as opposed to everyday political argument I think it's very important not to get hung up on labels such as left wing, or right wing, or liberal, or conservative. And I think the occasion of beginning to speak about Burke is a good moment to make this point. After all, I think it'd be fair to say that before you walked into this course if you had looked down the syllabus and somebody had said, "Who is the most radical thinker on this syllabus?" most of you would have picked out Marx. But as we've seen, Marx is actually a footnote to the Enlightenment. Marx is not somebody who engages in a radical departure from the ideas that were developed by Locke and the other thinkers that shaped the main ideas of the Enlightenment. Burke, on the other hand, is generally thought of as a conservative politically, and indeed he was a conservative politically, but philosophically he's a much more radical thinker than Marx was. He is somebody who really goes to the root of accepted assumptions in his critical questioning. Burke completely rejects the Enlightenment project as I have described it to you today. Let me say a little bit about who he was. He was born in 1829, so that makes him, I mean 1729, sorry. I gave him a hundred years there. He was born in 1729, a quarter of a century after Locke died, and the main work for which he is most known, his Reflections on the Revolution in France, was published in 1790, almost exactly a century after, actually more like 110 years after, Locke's Second Treatise. Well, I should say it was published a hundred years after, but it was written 110 years after, because we now know that Locke wrote The Second Treatise in the early 1680s. 
But what motivated Burke to write his reflections on the French Revolution was the appalling carnage that eventually resulted from the French Revolution. The French Revolution was not planned as a revolution. It was really street riots that escalated in Paris, but escalated to the point of the complete destruction of the whole society, the inauguration of a massive terror, which appalled Burke. And so he wrote this, what started out as a pamphlet, but became this very famous book, the Reflections on the Revolution in France, and that becomes a basis of Burke's outlook. He wasn't a professional scholar or academic. He was actually a public person. He would eventually become a Member of Parliament and has some things to say about democratic representation that I will come back to when we get to the theory of democracy. But at the time he wrote the Reflections on the Revolution in France, which is what I had you read excerpts from today, he was mainly preoccupied with what had happened, what had transpired across the Channel in 1789. And he was, in particular, concerned to establish against people like Richard Price, who's one of the people he engages there, that 1789 was not in any sense a logical follow-on of 1688 in England; 1688, of course, when we had the revolution in England, the Glorious Revolution of 1688 when William was put on the throne, which Locke defended, but from Burke's point of view that was a minor palace affair, not a fundamental or radical revolution. 
And in this sense Locke's view--I'm sorry, Burke's view of the English Revolution, for those of you who are historians here you might be interested to know, is very much at odds with the big new book called 1688 just recently published by Professor Pincus in the history department here, a very interesting book which argues that 1688 was a much more radical break with the past than people thought at the time, and certainly than Burke thought, because Burke thought that 1688 was not a radical break with the past whereas 1789 in France was a radical break with the past. And I think that another thing to say before we get into the particulars of Burke's view is that, unlike everybody else you've read in this course, Burke really does not have a theory of politics. He does not have a set of premises that you can lay out, conclusions to which he wants to get, and then a chain of reasoning that gets him from A to B, from the premises to the conclusion. There is no theory of politics in Burke. With Kant we talk about universalizability; with Locke we talk about this commitment to principles of scientific certainty. Burke, rather than a theory, has an attitude or a disposition, an outlook, and that outlook is informed first and foremost by extreme distrust not only of science, but of anybody who claims to have scientific knowledge. He thinks that human society is way too complicated for us ever to get completely to the bottom of it. That we are kind of carried along on a wave of very complicated history that we understand only dimly, if at all, and that that's not going to change. The human condition is a condition, first and foremost, of fumbling in the dark. He says, just to give you a flavor of this, "The science of constituting a commonwealth, or renovating it, or reforming it, is, like every other experimental science, not to be taught a priori." 
So here you can see a complete resistance to the logical reasoning that drove Hobbes and Locke in thinking about the structure of mathematics and a system of axioms of the sort Bentham tried to come up with. "No," says Burke, "Nor is it a short experience that can instruct us in that practical science; because the real effects of moral causes are not always immediate, but that which in the first instance is prejudicial may be excellent in its remoter operation; (so when we think we see something bad it might be having a good effect) and its excellence may arise even from the ill effects it produces in the beginning. The reverse also happens; and very plausible schemes, with very pleasing commencements, have often shameful and lamentable conclusions. In states there are often some obscure and almost latent causes, things which appear at first view of little moment, on which a very great part of its prosperity or adversity may most essentially depend." So the world is fundamentally mysterious and murky. And things that look good might have bad consequences. Things that look bad might have good consequences. The effects of our actions are going to be realized in the distant future in ways that we can't possibly imagine. And so that being the case the most important characteristic of thinking about politics is caution. We should be cautious. "The science of government being, therefore, so practical in itself, and intended for such practical purposes, a matter which requires experience, and even more experience than any person can gain in his whole life, however sagacious and observing he may be, it is with infinite caution that any man ought to venture upon pulling down an edifice which has answered in any tolerable degree for ages the common purposes of society, or on building it up again without having models and patterns of approved utility before his eyes." 
So what they did in the French Revolution was the antithesis of what Burke recommends, because they swept everything away and decided to build again tabula rasa. Burke is deeply suspicious of all attempts to do that and he thinks they'll end in disaster because the people who undertake them will not know what they're doing, and even more dangerous, they're not smart enough to know how dumb they are. They're not smart enough to realize that they really do not know what they're doing. They're not smart enough to understand that they will unleash forces which they will not be able to control. So Burke is, in that sense, a conservative who thinks about social change in a very cautious and incremental way. He's not a reactionary in the sense of being someone who's opposed to all change. He's a conservative. I think one of the nice definitions of conservatism in Burke's sense was actually put forward by Sir Robert Peel in the nineteenth century when he said-- he defined conservatism as, "Changing what you have to in order to conserve what you can." Changing what you have to in order to conserve what you can, as distinct from a reactionary view which would be just flat resistance to all change. Now, of course, this idea of conservatism as valuing tradition is very different from the libertarian conservatism of Robert Nozick that we looked at earlier in the course. The libertarian conservatism of Robert Nozick is anti-statist, anti-government, and resistance to authority being imposed on you, hence the notion of libertarian conservatism. Burke is a traditionalist conservative. He thinks that tradition is the core of human experience, and he thinks whatever wisdom we have about politics is embedded in the traditions that we have inherited. "They have served us over centuries," this is his view writing at the end of the eighteenth century, "they have served us for centuries. They have evolved in a glacial way." 
As I said, people make accommodations to change, but only in order to conserve the inherited system of norms, practices, beliefs, and institutions that we reproduce going forward. So that's the sense in which it's a conservative tradition; to conserve, the basic meaning of the word conserve, conservative. And so science is a really bad idea when applied to political and social arrangements because there isn't scientific knowledge, and anybody who claims to have it is either a charlatan or a fool, perhaps both. And so, as I said, he doesn't have a theory because he's skeptical of the very possibility of having a theory. He thinks we should heed what Clint Eastwood says in Magnum Force: "A man's got to know his limitations." A man's got to know his limitations; Burke thinks that in spades. He thinks we have to understand that our grasp of the human condition is very limited and it's going to stay that way. So, on the first of our two prongs of the Enlightenment endeavor he's completely out of sympathy. Now what about the second? What about the commitment to this idea of the importance of individual rights? We saw how this developed initially in Locke's formulation in a theological way when Locke argued that God created us with the capacity to behave in a Godlike fashion in the world. Each individual is the bearer of the capacity to create things, and therefore has rights over his or her own creation. In Locke's view we're all equal. We're equal in God's sight. He creates us all equally, and we're all also equal in the sense, very important for Locke, that no earthly power has the authority to tell us what the scripture says. Each person must do it for himself, and when they disagree they have to either find a mechanism to manage their disagreement, or if they can't, look for their reward in the next life. But basically each individual is sovereign over themselves. 
And that's where modern doctrines of individual rights come from. We saw how that played out with the workmanship ideal, Mill's harm principle all the way down through Nozick and Rawls. Bentham has, I'm sorry; Burke has a very, very different view of the idea of rights. First of all, they are inherited. They're not the product of reason or any contrived theoretical formulations. They're inherited. "You will observe that from Magna Carta to the Declaration of Right it has been the uniform policy of our constitution to claim and assert our liberties as an entailed inheritance derived to us from our forefathers, and to be transmitted to posterity--as an estate specially belonging to the people of this kingdom, without any reference whatever to any other more general or prior right. By this means our constitution preserves a unity in so great a diversity of its parts. We have an inheritable crown, an inheritable peerage, and a House of Commons and a people inheriting privileges, franchises, and liberties from a long line of ancestors." So what we think of when we talk about rights, for Burke, first of all, they're not human rights or natural rights for him, they are the rights of Englishmen. They are the rights of Englishmen; they are particular rights. They're the result of a particular tradition. The idea that there could be universal rights doesn't make any sense. It's not an intelligible question, as far as Burke is concerned, to ask, as Rawls would, what rights we would create for all people in some abstract setting. It doesn't make any sense to him. So it's the rights of Englishmen. And indeed, when Burke was sympathetic to the American Revolution, not the French Revolution, it was because he thought that the rights of the American colonists as Englishmen were being violated by the English Crown. And he was also sympathetic to claims for home rule for Ireland, again, on the same sort of basis. 
But it's this entailed inheritance, what we have been born into as a system of rights and obligations that we reproduce into the future. And those rights, above all, are limited. Again, just as our knowledge of the world is limited so our rights, in the normative sense, are limited. "Government is a contrivance of human wisdom to provide for human wants. Men have a right that these wants should be provided for by this wisdom. Among these wants is to be reckoned the want out of civil society, of a sufficient restraint upon their passions." We have a right to be restrained, a very different notion than a right to create things over which we have authority, a right to be restrained. "Society requires not only that the passions of individuals should be subjected, but that even in the mass and body, as well as in the individuals, the inclinations of men should frequently be thwarted, their will controlled, and their passions brought into subjection. This can only be done by a power out of themselves, and not, in the exercise of its function, subject to that will and to those passions which it is its office to bridle and subdue. In this sense the restraints on men, as well as their liberties, are to be reckoned among their rights." The restraints on men, as well as their liberties, are to be reckoned among their rights. "But as the liberties and the restrictions vary with times and circumstances and admit to infinite modifications, they cannot be settled upon an abstract rule (take that John Rawls); and nothing is so foolish as to discuss them upon that principle." So we have a right to be restrained. We have a right, most importantly, that others are going to be restrained, and that our passion should be controlled is something that he insists is an important part of what we should think of under the general heading of what it is that people have rights to. 
"One of the first motives to civil society, and which becomes one of its fundamental rules, is that no man should be the judge in his own cause. By this each person has at once divested himself of the first fundamental right of uncovenanted man, that is, to judge for himself and to assert his own cause." That's not that different from Locke, that first part. After all, Locke talks about the state of nature as being exactly a state in which we get to judge in our own cause, but for Locke we give it up in a conditional way. We never lose the right to revolution if society doesn't protect us, and that's what he thought was triggered in 1688. Burke says no. "He advocates all right to be his own governor. He inclusively, in a great measure, abandons the right of self-defense, the first law of nature. Men cannot enjoy the rights of an uncivil and of a civil state together. That he may obtain justice, he gives up his right of determining what it is in points the most essential to him. That he may secure some liberty; he makes a surrender in trust of the whole of it." This, to some extent, has a Hobbesian flavor that Hobbes says, "If we don't have law we'll have civil war, and so we have to give up freedom to authority." The difference is even in Hobbes's formulation there's ultimately the recognition that if society does not provide you with protection you have a reasonable basis for resistance and for overthrowing it. But in Locke's case, I mean, in Burke's case he doesn't want to concede even that. Because we cannot, once we've made the transition into civil society, we cannot go back. There is no turning back. We are part and parcel of this system of entailed inheritances and that is the human condition all the way to the bottom. He doesn't reject completely the metaphor of the social contract, but he makes it indissoluble. He says, "Society is indeed a contract. 
Subordinate contracts for objects of mere occasional interest may be dissolved at pleasure (if I make an agreement with you to do something we can agree to dissolve our agreement)-- but the state ought not to be considered as nothing better than a partnership agreement in a trade of pepper and coffee, calico or tobacco, or some other such low concern to be taken up for a little temporary interest, and to be dissolved by the fancy of the parties. It is to be looked on with other reverence (the "it" here is the state) - because it is not a partnership in things subservient only to the gross animal existence of a temporary and perishable nature - it is a partnership in all science; a partnership in all art; a partnership in every virtue, and in all perfection." "As the ends of such a partnership cannot be obtained in many generations, it becomes a partnership (now this is the most famous sentence Burke ever wrote) not only between those who are living, but between those who are living, those who are dead, and those who are yet to be born." A very different idea of the social contract, partnership between those who are living, those who are dead and those who are yet to be born. "Each contract of each particular state is but a clause in the general primeval contract of eternal society." So, the "law is not subject to the will of those (this is a flat rejection of workmanship), who by an obligation above them, and infinitely superior, are bound to submit their will to that law. The municipal corporations of that universal kingdom are not morally at liberty at their pleasure, and on the speculations of a contingent improvement, wholly to separate and set asunder the bonds of their subordinate community, and to dissolve it into an unsocial, uncivil, unconnected chaos of elementary principles." 
So one way of just driving home the radical break here between his thought and the social contract theorists is to mention that one of the standard criticisms that often gets made of social contract theory is, well, even if there was a social contact, you know, some people think of the adoption of the American Constitution as a kind of social contract. After all it was ratified by the states. Actually, the Articles of Confederation had said it had to be unanimously ratified, and they couldn't get that, so they changed it to three-quarters of the confederacy states. Still, there was an agreement of some sort, and it was ratified and so on, but people have often said, "Well, so what? So those people in the eighteenth century made an agreement. I didn't. What has it got to do with me? Why should it be binding on subsequent generations?" And that's often been a critique of the idea of the social contract. Burke turns that reasoning on its head. He says, "Once we see that this social contract is multi-generational between the dead, the living, and those who are yet to be born, who are you (any given individual), who are you to think that you can upend it? What gives you the right to pull the rug out from under this centuries-old evolving social contract? What gives you the right to take it away from those who haven't even been born who are part of this (he even uses the word eternal) eternally reproducing social contract." So it's a sort of mirror image of the critique which says, "Well, we never made it so why should we be bound by it?" He says, "It preexisted you, and you're going to predecease it, and you don't have the right, you don't have the authority to undermine it because any rights you think you have are the product of this evolving contract, they're contained within it." So society is not subordinate to the individual, which is the most rock-bottom commitment of the workmanship idea. On the contrary, the individual is subordinate to society. 
Obligations come before rights. We only get rights as a consequence of the social arrangements that give us our duties as well. So whereas the Enlightenment tradition makes the individual agent the sort of moral center of the universe, this god-like individual creating things over which she or he has absolute sovereign control, is replaced by the exact mirror image of the idea of an individual as subordinate to inherited communities, traditions, social arrangements, and political institutions to which he or she is ultimately beholden. If there was a pre-collective condition it's of no relevance to us now because we can't go back to it, and any attempt to try, look across the English Channel and see what you're going to get. That is the Burkean outlook in a nutshell, and it is, as I said, the most fundamental critique of the Enlightenment it's possible to make. And even though the Enlightenment tradition, as we have studied it here, was unfolding in the seventeenth, eighteenth, nineteenth and twentieth centuries, this anti-Enlightenment undertow has always been there as well. Not to make the metaphor do too much work, but you can really think of every wave of advancement in Enlightenment thinking washing down the beach and producing an undertow of resistance and resentment against it, both philosophically, and I'm going to start talking in a minute about twentieth-century Burkean figures, but also politically. 
One story about the rise of fundamentalism, and jihadism, and ethnic separatism is this is all part of the political undertow against the current form that the Enlightenment political project is taking, which is globalization, homogenization, this sort of McDonald's effect on the world, produces this backlash against globalization where people affirm primordial-looking attachments, even though there's probably no such thing as a genuinely primordial one, separatists, partial affiliations and allegiances, connections to doctrines which deny the scientific and rational project of the Enlightenment. And so, just as globalization has been advancing we've seen a resurgence of separatists, religious fundamentalists, nationalists, and other kinds of identities. Quite the opposite, for example, of what Marx predicted. Marx predicted that things like nationalism, sectarian identifications, would go away, and Lenin too. They thought that as the principles of capitalism defused themselves throughout the world, things like national attachments would go away. And indeed on the eve of the First World War there was the Second Communist International where they basically came out and said to the workers of Europe, "Don't get involved in this national war. It's not in your interest. You have a common class interest across nations against the interest of employers across nations," and of course this fell on completely deaf ears. In 1916 the Second International pretty much disintegrated. And, in fact, one of the big paradoxes of the twentieth century has been the persistence of things like nationalism through the first two world wars and then in the last part of the twentieth century, this resurgence of religious and other forms of traditionalist attachment that are fundamentally antithetical to the Enlightenment project. 
So the Enlightenment has always produced reaction, undertow, rejection, often from the people who don't benefit from it, and it's one of the ways in which I think the proponents of the Enlightenment have always been politically naïve. They've always thought that as modernization and Enlightenment diffuses itself throughout the world these kinds of primitive thinking will go away. Well, it turns out that they don't, and so one of the big tasks of political science at the present time is to try and understand why, to try and understand what the dynamics of political affiliation and identity attachment really are. And so that's a Burkean agenda. Now if you fast-forward from Burke to the middle of the twentieth century, I had you read a short piece, very famous and important piece, by Lord Devlin who was an English judge. Like Burke, someone with Irish origins, though some certain amount of ethnic ambiguity in both cases there about just how much Irish and just how much English, but we needn't detain ourselves with that in this course. And he was commenting upon something called the Wolfenden Report, which was published in 1959 by a commission that had been asked to tell the British Parliament what it should do about homosexuality and prostitution. And the Wolfenden Report had said, "The laws against them should be repealed. They should both be legalized on the grounds (they didn't use these terms but this is the basic thought or the term we would use today) that both homosexuality and prostitution are victimless crimes." They are, to use the jargon of our course, Pareto-superior exchanges. They're voluntary transactions among consenting adults that don't harm anybody else. And of course this was put in a different idiom because it was the 1950s, but that was essentially the point. They don't harm anybody, so it's just traditional prejudice, bigotry that leads us to outlaw these things and we shouldn't do it. That was what the Wolfenden Report had said. 
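The Pareto jargon invoked here has a standard formal statement; the notation below is a gloss from the economics usage, not anything in the Wolfenden Report or in Devlin:

```latex
% An outcome x is Pareto superior to an outcome y when, writing u_i for
% person i's utility, nobody is made worse off and somebody is made
% strictly better off:
\[
  x \succ_{P} y \;\iff\;
  \bigl(\forall i :\; u_i(x) \ge u_i(y)\bigr)
  \;\wedge\;
  \bigl(\exists j :\; u_j(x) > u_j(y)\bigr)
\]
% A voluntary exchange between consenting adults with no third-party harm
% fits this pattern: the parties prefer it, and everyone else's u_i is unchanged.
```

Devlin's reply, in these terms, is that the list of affected parties is too short: on his view the society's shared moral code can be harmed even when no individual's u_i falls.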
And Burkean-to-the-core Lord Devlin says, "No!" I don't know how caught up you are in the reading. Anyone who has read Burke--I'm sorry, Devlin, tell us why he thinks this. Yeah? We need to get you the mic. Why he thinks, why is it that Lord Devlin thinks that the mere fact that there's no harm is not enough of a basis for legalizing homosexuality and prostitution. Yeah? Student: He claims that it's not an attack against the individual but a harm against society. Prof: So what does that mean, though, when you say it's a harm against society? How do you unpack that in your own mind? Student: I guess it's maybe an attack against the morals that society tends to agree to. Prof: Yeah, well, agreed. Let's put brackets around agreed. It's not what we mean by it, but certainly the morals that are there. And where do they come from? Where do those morals, I mean, so we have a moral code that says homosexuality and prostitution are wrong, but where does that come? Anyone? Yeah? Student: Well, he put a lot of weight on the basis of religion for driving one's morals. Prof: Correct, religion, an interesting--look what he says about religion. He says, "Morals and religions are inextricably joined-- the moral standards generally accepted in Western civilization being those belonging to Christianity. Outside Christendom (there's a 1950s word, we don't say Christendom anymore, do we?) other standards derive from other religions." Outside Christendom other standards derived from other religion. "In England we believe in the Christian idea of marriage and therefore adopt monogamy as a moral principle. Consequently the Christian institution of marriage has become the basis of family life, and so part of the structure of our society. It is there not because it is Christian (this comes to the point about whether we've agreed). 
It has got there because it is Christian, but it remains there because it is built into the house in which we live and could not be removed without bringing it down." It's there not because it's Christian, it got there because it's Christian, it's a matter of history. It was a Christian civilization. So we have a Christian conception of morality, but he's not saying it's true. He's not saying that the Christian set of beliefs about religion is true. He has no interest in the question of whether or not it's true. He's saying here, "A different society might be glued together by a different religion which wouldn't create monogamy. It might create polygamy, and that would have its own history and its own system of rights and institutions and everything that goes with that." So it's conservative in the sense of affirming tradition, but not conservative in the sense of saying there are absolute moral values. Neither Burke nor Devlin ventures any opinion on that subject. They say it's not even really important. What's important is that the people in the society believe in these values. And if the people in this society don't believe in some system of values as authoritative, the society will fall apart. You can't put together a society just on the basis of interest. It needs more. It needs moral glue. So these folks, you could say when I say they don't really have a theory in the sense that we've looked at theories up until now in this course, it's because you could say, "Well, they're not political theorists. They're really sort of sociologists. They're really sociologists of stability because they're saying that it's necessary for a society to be stable that it's held together by this kind of moral glue of authoritative opinion." 
So when you say to Lord Devlin, when he's defending the outlawing of homosexuality and prostitution, "Well, that's just your bigotry," his answer wouldn't be to deny that it's in some absolute sense an irrational position, but he would say, "Every society needs its bigotry. Every society needs its prejudices." And so he doesn't appeal to rationality, but he does appeal to what he calls reasonableness. And what is reasonableness? It's basically the system of beliefs, as he puts it, "of the man on the Clapham omnibus." We might today say the woman on the A train reading the New York Post. The prejudices of the average person, that is the basic yardstick, and if the average person is appalled by some practices, then they should be illegal. And that's the beginning and end of it. So what about that? You could fast-forward it since he talks about homosexuality and what we call gay rights today. If you look at the American trajectory, in 1986 this came up before the Supreme Court in a case called Bowers versus Hardwick, and they essentially took the Burke-Devlin position. That is that states should be allowed to outlaw homosexuality because most people find it deplorable. A couple of years ago it came back to the court and they said, "Well, mores have evolved enough since 1986 that we're going to overturn Bowers versus Hardwick," very Burkean. They're following the man on the Clapham omnibus. They're following the woman on the A train's prejudices, beliefs and values, and that's as it should be. What about that? How many people find this appealing? Only two? How many people find it unappealing? So we still have at least half undecided. What's unappealing about it? Yeah? Student: [inaudible] Prof: Take the microphone. Student: According to his perspective we might still have a system of slavery in this country. Prof: According to this perspective we would still have slavery in this country. Well, I think he wouldn't concede the point that quickly. 
He would say what I just said about Bowers versus Hardwick that if the views of the man on the Clapham omnibus evolve enough, then we can recognize change. Now you might want to not accept that because what if they happen before--Yeah? Student: Yeah, to refute that I would just say that our morals and our ideas of what is right and wrong are shaped by the systems that we were born into and consequently I feel like Burke and Devlin's system ascribes a great deal of value to the moral conceptions at the beginning of society and that almost leads us to a system of stasis in terms of our morality. There seems to be too much stasis and no ability to reevaluate given how our moral systems are shaped. Prof: I think that's right, and we will pick up with this on Monday, but if you think that the basic society structure is okay you're likely to find this doctrine appealing, but if you think the basic structure of the society is deeply unjust then you're likely to be affronted by this outlook because one person's reasonable morality is another person's hegemony, and we'll start with that idea next time.
The Moral Foundations of Politics with Ian Shapiro: Lecture 13, Appropriating Locke Today
Prof: An important transition today. We're going to start talking about the social contract tradition. We've come a long way. It's still only February, but you've really worked through two of the major Enlightenment traditions of thinking about political theory. To get into the social contract tradition we have to go back again to Locke, who is a kind of hovering presence throughout the whole course, and Locke's idea of workmanship, which we thought about mainly in relation to creation of objects of value, the theory of property in Locke, and how that plays itself out in the labor theory of value developed by Marx that we've been talking about for some time over the past couple of weeks. Well, alongside the economics of workmanship, if you like, that goes back to Locke, there's a politics of workmanship, because just as we own what we make in the material sense, in Locke's story, we also own the political institutions that we create. And if you cast your minds back to the early lectures I gave on the Enlightenment project a big part of that claim was that we could have maker's knowledge of the political institutions we create just as we have maker's knowledge of the economic and social relations that we create. And not only do we have maker's knowledge so we can have this kind of certainty the early Enlightenment theorists were looking for, but we also have maker's authority over what we create. Remember I told you that Locke was engaged in this debate with other natural law theorists in the 1660s, where the big question was whether or not God is omnipotent. And the question was that if you said God is omnipotent it seemed to threaten the timelessness of natural law because he'd be in a position to change it. On the other hand, if you said that God does not have the power to change natural law that seemed to undermine his omnipotence. So either the timelessness of natural law or the omnipotence of God, but it seemed like you couldn't have both. 
And Locke wrestled with this question his whole life, but in the end came down on what's sometimes called the voluntarist side, the will-dependent side, the side which got him into the whole workmanship ideal. So we know what we make and we own what we make because God gave us the capacity to be able to operate as miniature gods and to operate in a god-like fashion creating institutions, and economic relationships, and social relationships over which we have the same kind of control with the caveat that we, as God's property, were bound by God's wishes which appear to us as natural law. Whereas, in the things we create, so long as we don't violate those constraints, us as God's creatures, we can do what we like. And that is the idea that informs Locke's argument in The Second Treatise where he says, to remind you, there's no earthly authority that can settle disputes about natural law and so it's every person for themself if there really is a disagreement, and I'll come back to the implications of that more fully later when we talk about majority rule. But the most important point to take away for right now is that we are constrained by natural law in the natural law tradition because we're God's property, and he has maker's knowledge over his creation which we are, but with the constraint or with the caveat that he rejects the traditional idea-- he was arguing against Filmer and others about that there's anybody, whether it's the Pope, or the King, or anybody else, who has the right to say what natural law means in the case of a dispute. And that's the beginning of modern individualism for Locke; that we're miniature gods on the one hand, and yes, we're constrained by natural law, but it's not much of a constraint, really, because we're the ones who get to say what natural law means in a disputed circumstance. So the individual comes into the center of thinking about politics. 
Now, Nozick, Robert Nozick, is the person we're going to use to ease our way into modern social contract theory. In a way it's an odd choice because Nozick is actually a critic of John Rawls, who we're going to read next, and probably wouldn't have even written the book that he wrote but for the existence of Rawls's book to which he was reacting. But I'm choosing to do Nozick first for pedagogical reasons. One is because Nozick explicitly builds off a Lockean set of intuitions, and secondly because his formulation of the social contract is another variation of the rights-utility synthesis that we looked at in connection with Mill, but it's one that purports to solve the main problem that we were left with in Mill, which is the Marxian critique of markets. If you recall from the end of our discussion of Marx, we said that one of the things that is left untouched by all the problems with Marx is Marx's critique of markets from the standpoint of the status quo, the starting point's problem, the garbage-in/garbage-out problem. That if you have an unjust status quo, and then a lot of market transactions, all those transactions are going to be infected by the injustice that goes into the status quo, if the corn dealer really is a starver of the poor, to use Mill's example in the chapter on free speech. Nozick thinks he has an answer to that. Nozick thinks that he has a reformulation of the rights-utility synthesis that answers that problem, and we'll only get to how he thinks he answers that problem next Monday, but he thinks that he has an answer, and in fact it's a brilliant answer. And even those of you who don't like that answer are going to find it very difficult to respond to Nozick's point. But before getting into the details of Nozick's argument I think it's worth just pausing for a minute to notice two problems that any social contract theory has, and has long been known to have, which anybody who tries to come up with a social contract theory has to deal with. 
It's long been known, first of all, that because there never was an actual social contract, that assumptions you make about natural man tend to be loaded with assumptions about human beings from your own society. This critique is maybe not as old as the hills, but it's certainly as old as Rousseau's critique of Hobbes. Writing in the eighteenth century, Rousseau said, the problem with Hobbes' Leviathan, which is another version of the social contract that we're not reading in this course-- Rousseau said the problem with Leviathan is that Hobbes takes assumptions from the people of his own day, and attributes them to natural man, projects them on to the idea of natural man, and you can't do that. And so Rousseau's own version of the social contract is much more historical. It attempts to take account of the evolution of people from some kind of almost pre-human condition to a condition in which social and political relationships evolve. But this claim is made time and again when people talk about the social contract as moving from a condition of pre-natural, pre-political man to a social and political condition people say, "Well, the problem with it is Rousseau's problem with Hobbes." That what people in fact do is take a set of assumptions about human beings that are the product of the society that's been created and attribute them to natural man. And so you get this problem that what you put into it is--it's a different but related kind of garbage-in/garbage-out problem. The assumptions you make about human nature are taken from the society you want to justify, and then you present them as though they were features of pre-social, if you like, human nature. And so the argument goes. What we now know from 200 years of anthropology is that actually Aristotle was right. There never was a pre-social condition. Human beings are naturally social creatures and you can't analyze them apart from their social and cultural environment. You just can't do it. 
Anything you try to do in that general direction will be--it will obscure more than it reveals. You'll end up with tendentious assumptions about human nature that will allow you to derive the conclusions that you want, but at the end of the day this is only going to persuade the people who agreed with you before you began and it's not going to convince the skeptics, so what's the point? So this is one problem that has long been known to attend social contract theory, and anybody who's going to roll one out has to deal with it. And what you find with the modern social contract theorists like Nozick and Rawls is they say, "Granted, we don't disagree that there never was a social contract." And indeed, even if you look at the-- those of you who are students of American history will know even if you look at the social contract that came the closest to being an actual social contract to create a society, namely the creation of the American Republic, even there if you look it didn't come about as the result of an agreement. The Federal Constitution was put in place in violation of the Articles of Confederation. It was largely imposed by the Federalist forces who were stronger than the anti-Federalists, than the Confederation forces. So even the American example, which is the one that inspires Nozick's book, is not a case of a creation of a society from whole cloth by agreement; it's neither out of whole cloth because there were already social and political arrangements before, and it's not really by agreement because it was imposed on those who didn't want it pretty much by force. We'll come back to the extent to which it was imposed later, but the basic fact remains. Nozick knows all of that. He's not a fool, and he's not ignorant, but he says, "Let's suppose society could be created by unanimous agreement, could be created by consent, what would it look like?" And we're not saying pre-political, pre-social beings, people like you and me. 
If we were in a situation where there was no state what kind of state, if any, would we create? And we'll see when we come to do John Rawls, after the break, he has the same approach. People often get Rawls and Nozick wrong. They're not saying, "We can imagine what people would be like if a state didn't exist in the sense that Hobbes did and Locke did, and Rousseau criticized them for doing." Rather he's saying, "You and me, considering ourselves as we are, if the state didn't exist what kind of state would we create?" We know it's a thought experiment. We know it's a hypothetical exercise; nonetheless, if we can answer that question, it gives us a standard by which we can compare existing institutions that didn't come about in that way from the standpoint of this normative ideal of consent. What would people have chosen, even though they never did choose it, tells us something about existing institutions, namely those that are closer to what people would have chosen are better, and those that are further from what people would have chosen are worse. So it's the search for a normative ideal based on a hypothetical contract. Whereas, Hobbes and Locke thought the state of nature was a condition that had actually existed in the world and to which people could return. Hobbes thought England, during the Civil War, actually went into a state of nature. Locke thought much of North America, in the seventeenth century, was in a state of nature. So it's a very different exercise in that sense. It's a hypothetical social contract, not an actual one, designed to generate a normative standard, a standard for evaluating institutions as they exist in the real world. That's all. So that's the first thing. You have to have an answer to this there-never-was-a-social-contract critique, and the answer of the modern social contract theorists is, "Yes, we know, but it's still a useful hypothetical exercise." 
The other problem that anyone rolling out a social contract theory has to address is that nobody really buys natural law arguments in the way that was often hoped that they would. Now, it's important to get clear about this because what we don't want to do is to buy into a rather simplistic, comic strip version of the history of ideas that sometimes shows up in textbooks. The comic strip version goes something like this: well, there used to be natural law theory and that solved the problem of how we get higher values for judging existing institutions, but then along came the decline of natural law and secularism, and we lost belief in natural law, and so now everything is a sea of relativism. Once you start down that path you wind up with Stevenson. You wind up, remember, emotivism and all that? We talked about it in relation to Mill. Once you start down the path of rejecting natural law you're going to wind up with extreme relativism and subjectivism. You're going to wind up with the idea that everybody's moral judgment is just as good as everybody else's. And the problem with natural law theory is it's a throwback to this era when people believed in natural law. And so now people don't, and that's the problem. And so how can you generate a natural law theory when most people don't accept the idea of natural law? That's the cartoon strip story. And the cartoon strip story has one very big problem with it, which you are in a position to know about because of what we've read with respect to Locke. Namely, that there never was agreement on what natural law required. I talked to you about the disagreement between those who favored the timeless universal conception of natural law and those who favored the so-called voluntarist or workmanship theory that Locke embraced. But actually that's only the tip of the iceberg. 
You go and read a book like Richard Tuck's book on medieval natural law theory and what you'll discover is that in the fifteenth, sixteenth, seventeenth centuries the disagreements that they had within the natural law tradition about politics were just as extensive as the disagreements we have today. That is to say you could find people defending everything from what we think of today as the far left, to everything that we think of today as the far right within the idiom of a natural law vocabulary. But there wasn't less disagreement. There wasn't less political disagreement. There were huge political disagreements that led to enormous--to civil wars, to religious wars in the sixteenth and seventeenth centuries. So the notion that appealing to natural law is some sort of panacea, and that the problem is that the world we've lost is a world in which there was natural law, doesn't get us anywhere because we know that natural law, whatever its virtues and deficiencies, is not something that can solve political disagreements. As I said, there was a natural law theory that was compatible with every political position you could imagine, and many that you couldn't imagine. If you want examples of that go and read Christopher Hill's book called The World Turned Upside Down, which is a book about ideas and political movements that got nowhere in the seventeenth century, and you'll see that natural law could be lined up with any politics you could possibly imagine. We don't want to say that the problem is that natural law used to constrain and now it doesn't because people agreed; we know that they didn't. Nonetheless, that still leaves the analytical problem that if there are going to be any constraints on what it is that people can agree to, that are not simply the result of what we have found in the world, they must come from somewhere. 
And if you're going to roll out a natural law theory that doesn't depend upon natural law, you've got to find some other basis for doing it, right? I mean, just to give you one more example to underscore this, we saw that the inequalities among people-- we talked about this some and we're going to talk about it more in connection with John Rawls-- they're a big problem for us because why should some people get more than others just because they're smarter or more hard-working than others, right? If you're a Lockean there's no problem with that proposition because if some people are more able to create wealth than others it must have been part of God's plan, right? But once you secularize workmanship and chop off the idea of natural law and God's plan, and all that, then you get into the sorts of problems we got into when we were dealing with Marx's theory of exploitation. Remember when we introduced the stay-at-home spouse, and the contributions of Sunday school teachers or anybody else to the creation of value, we got this overdetermined set of entitlements that didn't lead us anywhere. So once you secularize workmanship it seems to lead everywhere. If you no longer have the constraint that Locke had, that what we see here must be God's plan because this is how he created it, then what are you going to use instead? That's the question. What are you going to use instead of natural law? And the modern social contract theorists appeal to the eighteenth-century German philosopher Immanuel Kant, who came up with an idea that stands in for natural law, and that is the idea of--wait for it; I told you people don't like to say in words of one syllable what can be said in words of five-- universalizability, universalizability, nine-syllable word. And the basic intuition here is the following: the basic intuition is, if you could choose something from every conceivable standpoint, then it has the force of a moral law. 
Anyone know what the philosophical term for this was that Kant came up with, anyone who took a philosophy course? Student: Categorical imperative. Professor Ian Shapiro: Categorical imperative. Well, what was the categorical imperative contrasted with? Somebody? Anyone know? No reason that you should. The categorical imperative was contrasted with the hypothetical imperative. And the idea of a hypothetical imperative was if/then. So if you want your society to thrive don't create big government. That's a hypothetical imperative. Categorical imperative is, don't create big government ever under any circumstance. So the idea of a categorical imperative is that it's not dependent on an if/then statement, right? So that's the notion behind Kant's ethics; that we should look for propositions that we would affirm regardless of the consequences. That's the notion of a categorical imperative. Now Rawls goes into this in some detail of what its implications are for contemporary political theory, and we'll talk about that after the break. Nozick just takes it for granted, and one of the reasons I'm starting with Nozick is I don't really want to dig into the question yet of whether and to what extent Kant's ideas are applicable to contemporary politics. Nozick says, "Let's just try them out and see," basically, so one of his slogans is, "Kantianism for people, utilitarianism for animals," right? And the notion here is for human beings this idea of respecting principles that can be affirmed from every point of view, that's going to replace natural law. That's the move that Nozick makes, and it's the move that Rawls makes as well. Does anyone happen to know what the famous categorical imperative was, the example that Kant gave of something that rises to this level of a categorical imperative? It isn't just merely hypothetical. Well, the example he gave was, "Always treat others as ends in themselves, not merely as means to your own ends." 
The bumper sticker for this is autonomy, "Respect people's autonomy." This is our rights piece of the rights-utility synthesis, right? And Kant's view was that is a proposition everybody would affirm regardless of the empirical consequences, regardless of what effects it has. You'll always say that, no matter who you are, that we should respect the autonomy of others understood in this way. Yes, we all have to use one another to some extent. He doesn't say, "Never use people as means to your own ends"; he says, "Never use people exclusively as means to your own end." So that's his example, and that's what Nozick takes over in his conception of individual liberty which he puts out there right at the beginning of Anarchy, State, and Utopia, when he says, "There is no social entity with a good that undergoes some sacrifice for its own good. There are only individual people, different individual people, with their own individual lives. Using one of these people for the benefit of others, uses him and benefits the others. Nothing more." So there's a strong libertarian stance identified with this Kantian notion of respecting the autonomy of others. We can't use people for some greater good because then we're using them. We're violating their rights, and this is where Nozick is headed. And what he does is disarmingly simple, and easily dismissed, and you make a mistake if you dismiss him because you'll not see what turns on some of his claims that turns out to be very interesting and important. So what he does is, he says, "Let's imagine there was no government." We're not talking about primitive people. We're talking about people like you and me. Let's imagine there was no government. What would happen? What would people do if there was no government? And Nozick is aware that you could make different assumptions about that. Like Hobbes makes the assumption in Leviathan that people will be so afraid of one another that they will try and kill everybody around them. 
Then you'll get civil war. The fear that others are going to do something to you will be so great that unless you impose an authoritarian state on people they're going to have a civil war, and there's no other possibility. And so by making the state of nature very benevolent Locke thinks he's going a different way, because by making it malign it's easy for Hobbes to justify authoritarianism; the state of nature is so terrible that anyone would put up with an authoritarian system rather than the civil war in which life is "solitary, poor, nasty, brutish and short," as Hobbes put it in Leviathan. So Nozick says, "Let's not do that. Let's go with Locke, and let's make the state of nature relatively benign because it'll make our argument more convincing." It'll make our argument more convincing because if you make the state of nature relatively benign it's harder to justify the creation of a state. And we want to make it hard for ourselves to justify the creation of a state because then it'll make our argument more convincing to skeptics, right? So we want to make this intellectual problem as hard for ourselves as we possibly can. Now there's something of an intellectual sleight of hand going on in this that I'm going to come back to, but I'll just flag it for right now. And the way he sets it up, he says, "Let's make it hard to derive the idea of a state." So he says, "Imagine a world in which there's no government." It's a hard thing for us to do. Just to make this real; do you know when police were created? The first police force was created in Britain before here. No reason you should know this: in 1829 Robert Peel, then Home Secretary, created the first police force--the first police were called Peelers, named after Peel. But imagine, just you think about it. Think about living in a world where there were no police. I mean, it'd be a very, very different world, right? 
Think about the ways in which just in a day in New Haven, the number of things that go on that are somehow affected by the hovering presence of police. And so Nozick wants you to do that in a more radical way. What would happen if there was no government? Well, it wouldn't be a very efficient circumstance. Even if we don't think that everyone would be out chopping one another to pieces with machetes it still wouldn't be a very efficient situation because everybody would be the enforcer of natural law themselves. This is the Lockean story, as you know. So if I were in here lecturing to you, I'd have to worry maybe somebody's grabbing my car and making off with it. So I would have to find some way to protect my car while I was in here doing this, and maybe I could do it, but it's going to be a very inefficient thing for me to do. And so how am I going to do that? How is life going to go on if people can't even know that their property is going to be protected? Well, they're going to form protective associations. I'm going to say to some colleague, "I'll tell you what. You watch my car on Mondays and Wednesdays while I'm lecturing and I'll watch your car," a sort of block watch kind of idea, right? People are going to form associations to protect themselves and to protect their property because in the absence of that, it's a very inefficient, highly inefficient system. So you would imagine there would basically be like block watch, but even that would be not particularly efficient because the truth is, not only do I not want to watch my car on Mondays and Wednesdays, I don't really want to watch somebody else's car on Tuesdays and Thursdays either, right? So what would happen is we would start to have a division of labor. We would start to have a situation in which we paid somebody to watch cars, and they would be sort of like little militias. They would be businesses that sold protection. 
And so you could imagine a group of us joining one association that promises to protect cars, but others would join a different association that promised to protect cars. So we'd have the New Haven business that sells car protection, but we'd have the Hamden one, and we'd have the Orange one, and there would be these different businesses selling protection for people's cars. But, and this is one of Nozick's most important analytical claims with application to politics, he says, "The thing about coercive force-- protection--is that it's a natural monopoly." What do you think he might mean by that, anyone? It's bringing an economist category to think about a non-economic topic, but what do you think he might mean? Anyone want to try this? It might seem puzzling. Think about a country in which there are many militias like Lebanon in the 1980s or Russia after the collapse of the Soviet Union where you've got these organized crime syndicates, more or less, or Chicago, for that matter, during the heyday of organized crime. What happens when you have multiple entities selling protection? Yeah, take the microphone. Student: They start fighting with each other. Professor Ian Shapiro: They start fighting with each other, why? You're dead right, but why do they fight with each other? Student: It's like fighting over territory in a way. Professor Ian Shapiro: Territory, what else? Student: If one member gets in a fight with a member from a different group they have to protect that member and so does the other group, so they have no other choice but to fight each other. Professor Ian Shapiro: Exactly right. I mean, you just go and watch a movie like the Godfather, or Goodfellas, or any of those and you know what the dynamic is. 
I say to you, "You pay me protection money and I'll make sure you're taken care of," but then some bigger thug comes around the next day and says, "You might think Shapiro can protect you, but what do you think that wimp on the Yale faculty can do for you? Forget about it. You're going to pay me, and what's more you're going to pay me much more." And so that's going to go on, and how will it end? Generally what happens? Yeah? Student: [inaudible] Professor Ian Shapiro: Exactly. In civil wars somebody wins, usually, or else they drag on but eventually somebody wins, right? That's the claim about coercive force being a natural monopoly. For coercive force to be a good, it has to be exercised as a monopoly. For coercive force to be a good, it has to be exercised as a monopoly, and that's one claim Nozick wants to make, and the other one, which is just as important is, "It is the only natural monopoly." Coercive force is a natural monopoly, where we unpack that to say, "In order for it to be a good, it has to be exercised as a monopoly, and there's no other natural monopoly." This morning when I was coming in, Glenn Beck was complaining. How many of you know who Glenn Beck is? Yeah. Who's Glenn Beck? Somebody want to tell us, who's Glenn Beck? Anybody? Yeah? Student: He's a conservative talk show host and just a social commentator in general. Professor Ian Shapiro: He's a conservative talk show host and social commentator who sees himself as kind of disciplining the Republican Party to keep it close to what he sees as conservative principles that are defined in a strongly libertarian way. And so what was his complaint this morning? His complaint was that Mike Huckabee had Michelle Obama on his show, or did some event with Michelle Obama, agreeing with Michelle Obama that obesity is a big problem in America. And he said, "How dare Mike Huckabee do that? Of course obesity is a big problem in America, but that doesn't mean it's an invitation for the government to get involved in stopping obesity. 
The government should only do one thing; protect us from the bad guys." In that sense he was--he's probably never heard of Nozick and all of Nozick's argument, but he was embracing this idea that coercive force is the only natural monopoly. So Nozick wants to say, if we go back into our thought experiment, he wants to say, "Because coercive force is a natural monopoly somebody's going to win. They're going to either buy up the other protective associations, or defeat them, wipe them out, or marginalize them, make them irrelevant by being so much more powerful that nobody pays attention to them. So in effect you're then going to have one dominant protective association, right? Once you have one dominant protective association it's something pretty much like a government, but still there are going to be some people who don't recognize it, and these are the people that Nozick calls independents. And Nozick spends a lot of time discussing these people, and you might think, "Why?" There's this very long and difficult to understand chapter on the incorporation of independents into his system. And you will find it confusing to read and you'll think there's something wrong with you that you can't follow it. Actually even though most of what Nozick writes is extremely clear, the confusion is Nozick's, so you shouldn't beat yourself up too much. And I'm going to unpack it in more detail on Wednesday what he's saying and why it matters, but this is the basic intuition: he has to worry about independents for two reasons. One is just to complete his account of how a government would form, but the other one is normative because remember that the social contract argument depends on the idea of consent. Remember we said the utilitarian tradition is maximizing happiness, the Marxian tradition is about ending exploitation, the social contract tradition is about consent as the basis for political legitimacy. 
So even if the whole story about force being a natural monopoly is correct there are still going to be sort of loony, principled anarchists out there, who don't recognize the legitimacy of the state, you have to worry about. Nozick, writing in 1974, isn't thinking about the sorts of people we're thinking about today in this connection, and so he talks about some anarchist wandering around who wants to protect their own rights, and it has something of a contrived quality to it in his discussion. But think about, when you read "Independent," think Osama bin Laden or Mr. Stack who flew his plane into the IRS building the other day; these are Nozick's independents, these people. They say, "We don't recognize the legitimacy of your state. We don't recognize your protective association. We're not going to accept it." These people are a big problem for Nozick, as I said both as a sort of empirical theoretical matter; he's saying what would happen with these people, but then as a normative matter as well. So what is his point about compensation and all of that? I'm going to dig into that on Wednesday. I'm just going to tell you now, briefly, because we're out of time, what happens. He's going to basically say, "These people are going to be forcibly incorporated into the social order, and the theory of compensation is an account of what makes that forcible incorporation legitimate," and I'll dig into that on Wednesday. We're out of time now, so we'll pick up with that right at the beginning. Cheers.
The Moral Foundations of Politics with Ian Shapiro
Lecture 24: Democratic Justice Theory
Prof: We ended last time by talking about the sense in which Schumpeter's competitive democracy is minimalist, and I said that it is in an important sense minimalist. It's linked to this idea of the competitive struggle for power and the notion that turfing the rascals out is the sine qua non of democratic politics. There's one respect in which even if this is a minimal requirement it's nonetheless a very substantial minimal requirement in that the way it has been operationalized as... Professor Ian Shapiro: So the basic issue was, is this conception of democracy too minimalist? It is minimalist in an important sense in that it reduces democracy to this competitive struggle for the people's vote. I said there's one respect in which it clearly is not minimalist, which is that if one's expectation is, as Huntington and others would come to argue later, that we can't call a country a democracy until there've been two turnovers of power, that means the US was not a democracy until 1840. Japan has only recently become a democracy. Countries like South Africa are not yet democracies, or at least we can't say they are, because we don't know what would happen if the ANC lost an election; so, minimalist, but not negligible. And I think that for those who complain that Schumpeterian democracy is too minimalist, try living in a country that doesn't have it and you will find that minimal is indeed not negligible. Still you might say that raises the question of what it is that we can reasonably expect from democracy, because most people typically expect more from democracy than just the turnover of government. 
If we go back to the large themes of this course remember that when I introduced the subject of democracy and we looked at Plato and Tocqueville's critiques of it, we really were left wondering, "Well why would anyone think that if protecting individual rights and basing politics on scientific principles of truth is the answer, why would we want to pursue democracy as the goal to achieving that answer?" But if you now take a step back and ask, "Well, compared to what?" democracy starts to look pretty good. Because what we have seen in the literature on democracy is that no other political system does a better job of protecting individual rights. We saw that Madison was greatly concerned with the separation of power and the creation of we might call republican constraints on democracy, but the literature from the twentieth century has established, principally by Bob Dahl but also from many others, that if you look at the addition of judicial review, it's just not something that lawyers like to hear, but if you look at the addition of judicial review to democratic politics it doesn't really add anything. That is to say the way in which empirically rights are best protected is by creating Schumpeterian democracy. Adding judicial review on top of that doesn't seem to make any difference as far as preserving individual rights is concerned. The big thing is to get and keep democracy, not to get and keep judicial review, and I think that is reflected in the fact that separation of powers, at the end of the day, is words upon a parchment whereas the pluralism that guarantees competitive politics is embedded in the society and that is ultimately the guarantor of freedom under democratic conditions. If we think about the truth coming out in politics, finding a political system that as Mill would have it could best track the truth, again what we find when we asked the question "compared to what?" democracy does better than the going alternatives. 
You can think of Schumpeterian competition as institutionalizing Mill's demand that we have to have competition of ideas. Remember how Mill got from freedom to utility through his idea of the truth coming out of competitive argument, not deliberation, not sitting around and contemplating things together, but people having to argue for their views and defend them on the grounds that they can meet the objections of their critics. And so Schumpeterianism is a kind of institutionalization of the competitive argumentative ideal that Mill talks about in the long chapter on freedom of speech in On Liberty. And so, again, when we say "compared to what?" democracy does better than the going alternatives in preserving the freedom of speech and the competition of ideas that is likely to make the truth come out in the long run. So, despite the fears of Plato and Tocqueville when we pose the question "compared to what?" democracy does better than the going alternatives in vindicating these Enlightenment ideals. As Churchill said, "Democracy is the worst system of government except for the others that have been tried from time to time." Now you could buy everything I've just said now but still feel that this is too minimalist a conception of democracy to meet the sort of expectations that people have when they make demands for the creation of democracy in the real world. And I think there's merit to that objection, and these last two lectures are designed to address it to the extent that I think it's possible to address it. If we think about the conditions under which people demand democracy, they're usually conditions under which people have a strong experience of injustice. So those of us who were around in the 1980s would hear objections from behind the iron curtain to communism. People didn't like communism, and what they demanded was democracy. 
Or if you go to apartheid South Africa what you find is, again, people find themselves appalled by and rejecting of apartheid, but what do they demand? They demand democracy. Now, if in either of those instances you went to those people, those anti-communists who were demanding democracy in the former Soviet Union or the anti-apartheid activists who were demanding democracy in South Africa, and you said to them, "Well, tell us what a perfect democracy would look like in Russia or in South Africa," they wouldn't have been able to tell you, and I think that that's an important observation. It's an important observation because it captures a feature of human nature that we haven't commented on very much, although it came to the surface in our discussion of MacIntyre, namely that human beings are reactive creatures. They shy away from what does not work and then fumble in the darkness in search of something that works better or at least something that fails less badly. The famous economist, Amartya Sen who I mentioned to you briefly, I think, made this point brilliantly in a new book of his called The Idea of Justice, and it expressed some of his frustration with the academic literature on justice, which seems to get caught up in debating questions that are three points to the right of the decimal without moving on the big questions of justice. And Sen said, "Imagine that you were sitting in a sauna and the controls for the sauna were outside so that you couldn't reach them, and it got hotter, and hotter, and hotter, and you were really, really hot, and you said to the person who had their hand on the controls outside the sauna, 'Turn it down. It's way too hot.' And his reply was 'Well, I'm not going to turn it down until you tell me what the optimal temperature is.'" And, of course, what Sen's point is you don't know what the optimal temperature is. What you know is this is much too hot. 
And so I think Sen's little story, in a more imaginative way than anything I have come up with, captures this idea that human beings are reactive creatures and we say, "This doesn't work. This doesn't work. This doesn't work," and we're constantly looking for something that does better. And so the fact that anti-communists in the 1980s couldn't describe what a well-functioning post-Soviet democracy would look like, or the fact that anti-apartheid activists in the 1970s and 80s couldn't characterize a democratic South Africa for you in any detail, doesn't detract from the fact that their demands for democracy captured something about what they thought was fundamentally unacceptable about the existing state of affairs. And so people demand democracy because they experience injustice, and they want justice, and they hope that democracy will deliver it. Now some Schumpeterians, Huntington foremost among them, have said, "This is really bad. This is really a bad thing, because what's going to happen when people experience injustice and demand democracy is they are inevitably going to be disappointed." Indeed there was a very interesting poll, I thought, out of South Africa on this very point last year where they found a majority of South African Blacks saying, "Things were better under apartheid than now." But then when asked would they rather go back to apartheid the majority said, "No." It captures, I think, this tension and this paradoxical expectation that people have of democracy. But the Huntingtonian point was, "Well, that you would jeopardize democracy if you get people to load too many expectations onto it." This, to some extent, is defied in the second half of that poll I just mentioned to you because even though people say things were better in the apartheid years they still don't want to go back there. But I think Huntington might say, "Well, eventually things are going to change. 
Eventually when South African democracy fails to deliver on people's expectations about justice, the regime itself is going to come into jeopardy." If you look at South Africa, just to pursue this example, we've now had four elections since apartheid. South Africa is one of the most unequal countries in the world, has one of the highest Gini coefficients, yet the top marginal tax rate today is lower than it was at the end of apartheid. There hasn't been land reform. There hasn't been significant redistribution of income or wealth. There's been the creation of a small black millionaire class, but for the vast majority of blacks, they're still as badly off as they were before. And so the Huntingtonian thought is that if people load expectations onto democracy that can't be met, then when those expectations are frustrated eventually the problem is going to be that people are going to blame democracy rather than the government of the day, and turn on democracy when some populist dictator comes along and promises, say as we've seen in Zimbabwe, massive land redistribution at the expense of democracy. And so the modern Schumpeterians have tended to say, "We should try and disabuse people of their unrealistic expectations of democracy so that we don't lose at least the minimal benefits of competitive democracy which we've already agreed are not negligible." And so this was presented as a kind of realist realpolitik take on democracy, that people shouldn't expect it to diminish injustice. The problem with the Huntingtonian view is that people are not going to change their expectations because some professor of political science tells them to. There are deep-seated reasons why people turn to democracy when they experience injustice, and are not going to give up on the appeal to democracy in order to remedy it. 
And those reasons have to do, I think, with the topic we ended on at the very tail end of last Wednesday's lecture: the impulse for democracy comes from the impulse to resist domination. And there's a connection between democracy and fighting injustice because both of those things are connected with resistance to domination. So if we think that the Schumpeterians are right to say that democracy is often going to fail to deliver on the project of diminishing injustice, but naïve to think that people are therefore going to stop creating expectations of democracy, that creates a different kind of agenda. That creates the agenda that I want to talk to you about for the rest of today's lecture and Wednesday's lecture, and that is: how can we think about promoting justice by democratic means? Given that people are going to have expectations from democracy the better course is to try and find institutions that can deliver on those expectations. And I think it's an important consideration not only when we reason about democracy, but also when we reason about justice. Many years ago when I was teaching this course, before I had written Democratic Justice, and indeed one of the events that caused me to write it was a question from a student in the class. I had been teaching Rawls and I had gone through the principles, the difference principle and all of that, and I had explained that Rawls was the most influential political philosopher of his generation, and that this theory of justice had completely revolutionized modern political philosophy. And this student put up their hand and said, "Professor Shapiro, now that Rawls's theory has been established, why hasn't The Constitution been changed to include it?" and many of the students in the class laughed. And they laughed why? Does anyone have a guess? Why do you think people laughed? It seemed like a naïve question, why though? 
John Rawls got the answer, so our Constitution doesn't reflect that answer, why haven't they changed it? Why do you think students would have laughed? Yeah? Student: People think there's a disconnect sometimes between political philosophy and actual politics and policy-making. Prof: They think there's a disconnect between political philosophy and actual politics, but why is there a disconnect? I mean, these political philosophers are trying to get the right answer. So let's suppose it's true that Rawls got the right answer, and Nozick didn't, and Dworkin didn't, and Shapiro didn't. Rawls got the right answer. He solved the problem. Why don't we just implement it? This was, after all, what Bentham thought. Bentham thought he'd figured it all out and he went running around the world with his constitutions and was deeply disappointed when countries wouldn't adopt them. Why do we resist this idea? Yeah? At the back, can we get the microphone to the back? Student: Because generally we don't think that claims are absolutely true and we maintain that they can be proved false in the future. Claims are fallible. Prof: So part of it is the fallibilism of the mature Enlightenment. That people somehow resist the idea that anybody's got it perfectly right in the sense of getting a geometric proof. Anything else? Yeah, over here? Student: [inaudible] Prof: Just hold on a second. We want to record what you say for posterity. Student: The democratic system doesn't change that fast. I mean, part of the nature of the system that we have is that it's slow moving. Prof: It's slow moving, yeah, but still why shouldn't it move? The student might say, "Well yeah, okay, but so they had these ideas in the eighteenth century, now Rawls has better ideas, why shouldn't we move to Rawls's ideas?" I think there's something that will make people resist. 
There are many people who might concede that Rawls has a better argument than Nozick, or Dworkin, or Shapiro, or any of the other people, Mill that you've been reading, and still want to say it shouldn't be imposed on the society; that somehow principles of justice have to be democratically legitimated in order for us to be forced to live by them. And so I think whether you start from the justice end and you are confronted by this reality that people demand democracy when they experience injustice, or when you start from the democracy end, you realize that people are not going to embrace principles of justice unless they can triumph through democratic institutions, you realize that pursuing democracy and justice together is important. After all, as I said to you when I talked to you about Madison, even though they thought they had designed the best constitution that they could agree to at the time nobody had any illusions that this would be acceptable if it had not been adopted by the people of the state of New York. So having the right answer is not enough. You've got to have the right answer but convince people through democratic mechanisms that you've got the right answer. So democracy and justice have to be pursued together. I said that the animating idea behind democracy is the appeal of resisting domination, but I think the procedural ground rule is what I'm going to call the principle of affected interest. And this was nowhere better articulated than by Nelson Mandela in 1962 in his statement to the court before sentencing. A little piece of relevant background that you may not know, you don't have to write this down, I will put it up on the server. A piece of background is that they had been convicted of treason. The ANC had finally suspended their peaceful opposition about five years earlier and turned to armed struggle. And then a number of ANC leaders had been arrested and tried and convicted of treason. 
And their attorneys told them that they were going to get the death sentence, and the only way they could possibly avoid the death sentence was to get up and be contrite and beg to be let off. And the young Nelson Mandela said, "No, I'm not going to do that," and he stood up and he made this famous speech in which he said, "I'm charged with inciting people to commit an offence by way of protest against the law, a law which neither I nor any of my people had any say in preparing. "But in weighing up the decision as to the sentence which is to be imposed for such an offence, the court must take into account the question of responsibility, whether it is I who is responsible or whether, in fact, a large measure of the responsibility does not lie on the shoulders of the government which promulgated that law, knowing that my people, who constitute the majority of the population of this country, were opposed to that law, and knowing further that every legal means of demonstrating that opposition had been closed to them by prior legislation, and by government administrative action." "We played no role in making the laws that affect us, and we have no means of opposing the laws that affect us," is what he was saying, "and that's why we turned to the armed struggle, and it's not our failing." Of course the government was calling them terrorists. Of course, why wouldn't they call them terrorists? But Mandela's position was that the principle of affected interest had been violated. This notion is, I think, very close to the most fundamental procedural idea in democratic theory: that people whose interests are affected by a decision presumptively should have some say in making that decision. If you think about the Boston Tea Party, which the current Tea Partiers are trying to piggyback on the legitimacy of, it was the same notion, no what without representation, no what? Student: Taxation. Prof: Yes.
So it's the same idea that people who are affected by decisions, if you're going to tax us we want to be involved in-- we want to have representation in the decisions about taxation. And so it's that idea I'm trying to capture when I talk about democratic justice. And I want to describe first a general argument and then some particulars. The first is that this rests on a broad conception of politics. What do I mean by a broad conception of politics? Well, consider this fact. Those of you who have read political philosophy in the history of the tradition will know that for most of the great theorists of the past the organization of the political system was only one piece of a theory of politics. Plato, Aristotle, Locke, Mill, all of these thinkers thought that it was important to have a theory of family life, a theory of education, a theory of how the whole society operated. Politics is not just about what goes on in buildings in Washington and in Hartford. Theirs was a broad conception of politics. And one of the criticisms of much contemporary political theory has been that it ignores the broader society. So for example, it was a criticism made of Rawls's theory of justice by feminists that it ignored the structure of the family. Rawls talked about heads of households as the basic representative individuals behind the veil of ignorance. And when he said that his theory was a theory of the basic structure of society, feminist theorists such as Susan Okin, now deceased, and others made the argument, "How can you say you're talking about the basic structure of society while ignoring the family?" And Rawls eventually conceded the validity of that criticism and came to say toward the end of his life that had he to do it over he would have included the family as part of the basic structure of society.
So if politics is about power relations, then presumably you should think about power relations wherever they occur in social life and not simply power relations as they occur in the political system narrowly defined. And so the spirit of my argument for democratic justice is to base it on a broad conception of politics rather than a narrow one. Of course that doesn't mean that every political relationship is a politicized relationship. We might say the family is an intensely political institution, and indeed debates about education, and debates about abortion, and debates about many other subjects have politicized the family in recent times, but in the 1950s it was not a politicized institution. It was seen as something beyond politics, of no relevance to politics. So what institutions are politicized in that sense, consciously conceived of as political, is a separate question from what institutions are in fact political, if we define political as involved in the reproduction of power relationships in the society? So we have a broad conception of politics. And then secondly we have a semi-contextual argument, and this is to try and take account of what people like MacIntyre have argued. That what people are willing to accept is largely conditioned by the social circumstances into which they are born, the traditions that they find structuring their lives, and that we have to take into account the context in which people find themselves. But there's more to life than just the context. That is to say there are inherited traditions and practices, but as we reproduce them into the future we have choices to make, and we need principles to guide those choices. And so the argument of democratic justice is that we do develop general principles of a sort, but they're semi-contextual. That is, they play themselves out differently in different historical circumstances and in different parts of the world. 
So you may, for example, have an affirmation of the idea of non-domination in family life, but that's going to have to be worked out very differently in America in 2010 than it might have been in America in 1950, not to mention in countries that have inherited polygamist systems of traditional marriage. We're going to have to think in context-sensitive ways about how to realize those general principles in the different circumstances. A third point, and I apologize here: I did swear off impenetrable jargon at the beginning of this course, and you might think talking about superordinate and subordinate goods is not very user-friendly terminology, but let me explain what I mean by this. If we take a broad conception of politics one of the things that follows from it is that power relations are everywhere. Power infuses everything we do. There are power relations in the family. There are power relations in the workplace. There are power relations in sports teams. There are power relations in classrooms. Power infuses everything. This is, of course, an idea that the French political commentator Foucault, now deceased, is famous for: that power is everywhere. But of course it's as old as the hills. Plato was of the same view that power relations are everywhere, and indeed the reason Plato affirmed a broad conception of politics was because he recognized that power is exercised everywhere in the society, and so if the theory of politics is really a theory of power relations it's going to have to track power wherever it goes. So if power relations are ubiquitous and politics is ubiquitous then it seems like, well, everything is politics. And it's that last phrase that I want to dissent from, that last phrase in Foucault or in Plato that I want to dissent from because, in fact, what we really see is not that everything is power, but that power infuses everything, and there's an important difference.
Yes, there are power relations in the classroom. Your teaching fellow has a certain kind of power over you. I have a certain kind of power over you, but that's not the only thing that goes on in the classroom. Presumably also what goes on in the classroom is enlightenment, not in the big E sense of the Enlightenment, but enlightenment in the sense of communicating knowledge to you. In the firm, yes there are power relations in the firm. Managers have power over workers. Shareholders have power over managers, at least in certain circumstances. Of course there are power relations in firms, but the exercise of power is not the only thing that goes on in firms. There's the production of goods and services that goes on in firms. Yes, there are power relations in sports teams. Again, coaches have power over players. Donors, you might say, have power over coaches, even university presidents sometimes. So of course there are power relations associated with sports teams, but again, sports teams are not just about power relations they're also about playing sports well. So you can get the point from these examples. We could go everywhere through society and see, yes, social relations often involve power, but that's not all they involve. So to my way of thinking the superordinate good is the playing excellent sport, or producing goods and services, or communicating enlightenment to students. Those are the superordinate goods, or what MacIntyre called the internal goods when he was talking about his practices. Those are the superordinate goods that guide our activities in different walks of life. The subordinate goods have to do with the power relations. And what I want to say is that the goal of a democratic conception of justice should be to democratize the subordinate relations as much as possible while interfering with the superordinate goods as little as possible. 
You, at the end of the day, want the sports team to be able to play the best football it can play, or you want the students to learn as much as they can possibly learn, or you want the firm to produce as efficiently as possible the goods and services that it produces. So those are the superordinate goods. However, there is this fact about power being mixed up in the pursuit of superordinate goods, and democratic justice is about democratizing the power dimensions of human interaction while interfering with the non-power dimensions as little as possible, and the creative challenge is to find ways to do that. And I think that when we're thinking about conditioning the subordinate dimensions democratically, there are really two dimensions of democratic justice that are both present in that famous quote from Nelson Mandela that I read to you a few moments ago. One is the idea of collective self-government. It's the idea that, as the principle of affected interests intimates, anyone who is affected by a decision should have a presumptive say in the making of that decision. It doesn't necessarily mean everybody has an equal say. We'll get to those issues later, but everybody is presumed to have a say in the making of decision that affect them. No taxation without representation. It's the idea of collective self-government. If we're going to be affected by decisions, we should have a say in making them. But then separate from that and independent of it is the idea of the legitimacy of opposition, the legitimacy of resisting decisions that you don't like, and I think there are two reasons for this. Forget about the presumption against hierarchy for a minute. I'll get to that shortly. Just focus on the idea of institutionalizing opposition. There are two reasons for that. One I've already alluded to today, which is that we're always fumbling in the dark. We're always resisting things that haven't worked well in the past. We're trying to change things. 
We might be rebuilding the ship at sea, as Devlin says, but we are trying to rebuild the ship. We are trying to make things better as we reproduce them into the future, and unless we have the freedom to oppose the existing order, then the possibility of change becomes illusory. But a second and more fundamental reason that we should institutionalize rights of opposition is that you now know from taking this course that there are no perfect decision rules. We saw that if we just focus on the arms-length types of transactions that characterize national-level politics, politics in buildings in Washington, we did end up with a presumption in favor of majority rule when we worked our way through the difficulties with Buchanan and Tullock, and Brian Barry, and Rae, and all of that. That other things being equal you protect yourself best with majority rule, but it's not a perfect decision rule. You know from Condorcet and Kenneth Arrow that there is no perfect way to aggregate preferences into a social welfare function. So if there's no perfect decision rule, one of the things that follows from that is that whatever the decision rule, whatever the result, there are going to be people who object to it and who object to it legitimately. People are going to feel that their interests have not been taken adequately into account. And so opposition is important for that reason to give people the possibility of trying to get things changed; very important for the stability of democracy as well, because if you don't create avenues for loyal opposition over time you're more likely to get disloyal opposition. If people feel there's no possibility of change they might as well reach for their guns. So there are two dimensions of democratic justice for that reason. There's collective self-government and then this idea of institutionalizing opposition.
In practice I suggest that one of the most important ways in which we institutionalize opposition is with a presumption against hierarchy. If you think of the examples I just gave you-- sports teams, classes, firms, you could think of many others, armies, families--they're all hierarchical to a very considerable extent. All those social forms are hierarchical, and often the hierarchy is essential to the realization of the superordinate good in question. So it's not the case that hierarchies are necessarily bad, but it's when hierarchies atrophy into systems of domination that they become bad. Power corrupts and the problem is to prevent those who are higher up in hierarchies from taking advantage of their hierarchical authority in order to dominate others. And so when we confront hierarchical social arrangements there are a number of questions that we can ask; this is what I'm calling interrogating hierarchies. We can ask, "Is a hierarchy inevitable?" Well, the hierarchy of a parent over a child is inevitable, but the hierarchy, say, in the 1950s, of a husband over a wife was not inevitable. If the hierarchy is inevitable we're going to have to think about it in one way. If it's not inevitable we're going to think about it in a different way. Is the degree of hierarchy appropriate? Children must be subordinated to their parents, but maybe they don't have to be subordinated for 18 years. We have the arguments of the Children's Rights Movement that wanted to treat children as miniature adults almost from infanthood. So we have to think about, "Is the hierarchy appropriate? Whose interest does the hierarchy serve? Is it really in the interest of the production of the superordinate good?" Think of a boss who has hierarchical authority over a secretary and says to the secretary at some point, "Unless you're willing to go to bed with me, you're not going to get a promotion."
Then the hierarchical authority has atrophied into a system of domination because the efficiencies that would be gained from the boss having authority over his secretary have been perverted into something that operates in the interest of the boss, perhaps, but it doesn't serve the hierarchy as it was created. How fluid is a hierarchy? Is it self-liquidating? If you think of the situation where a child becomes a parent, a child becomes an adult, that's a self-liquidating hierarchy. Whereas, if we go back to the nineteenth century and the father turned his daughter over to her husband, that would be a non-self-liquidating hierarchy. Is there vertical mobility within the hierarchy? Think of the debates in the Catholic Church about whether a woman can become a priest. There's not a lot of vertical mobility within that hierarchy. Is the hierarchy symmetrical? We think of the defense of polygamy, but most societies that have polygamy allow a husband to have many wives, but not a wife to have many husbands. Asymmetrical hierarchies are more suspect than symmetrical ones. What are the opportunities for exit? Can people leave hierarchical social situations? You think about polygamy in South Africa, it's essentially elective. People can choose polygamist arrangements, but they don't have to, whereas in some societies in fundamentalist cultures polygamist marriage is enshrined in the legal system. Other things being equal, hierarchical systems are more suspect when the costs of exit are high for the people at the bottom. How insular is the hierarchy? For instance we look at the Amish. It's a withdrawing sect. They don't want to restructure the rest of the social order, so that is presumptively less suspect than a fundamentalist group that does want to restructure the social order. So there are all of these questions one can ask about hierarchical social relationships. 
You have to ask them in a context sensitive fashion, and then you can get some answers that tell you what we should be trying to pursue in the name of democratic justice. And I will pick up with some of those answers on Wednesday.
The Moral Foundations of Politics with Ian Shapiro
Lecture 17: Distributive Justice and the Welfare State
I want to pick up where I left off on Monday speaking about Rawls's two principles of justice. And as you will recall, I mentioned that Rawls really changed the subject with respect to what the metric of justice is. That rather than focus on utility somehow measured, or welfare as it's sometimes called, instead Rawls embraces the resourcist idea of focusing on certain basic resources. The assumption being that no matter what your particular goals in life turn out to be, no matter what your particular life plan turns out to be, and those are not things we know because we're behind the veil of ignorance, you're going to want more rather than less in the way of liberties, more rather than less in the way of opportunities, and more rather than less in the way of income and wealth. Something I didn't mention that'll come into play in today's lecture is that Rawls has to, of course, deal with the fact that the moment you have a theory that affirms more than one value you have to think about, well, what happens when the values conflict? What if maximizing liberties can only come at the expense of opportunities, or, if you like, what if distributing income and wealth in a way that you regard as fair or just conflicts with what you say about the distribution of liberties? Any time you have more than one value you have to deal with the possibility of conflict among them. And he does deal with that. He appeals to what he calls a lexical ranking, which is short for the more cumbersome term lexicographical ranking. And what that means is that you still want more rather than less in the way of liberties, opportunities, and income and wealth, but anytime there's a conflict, something higher in the lexical ranking trumps what's lower. So that if the only way you could get more rather than less income and wealth was to compromise people's liberties, you wouldn't do it.
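To make the lexical ranking concrete, here is a minimal sketch; it is not from the lecture, and the numeric scores are invented for illustration. Each society is scored on liberties, opportunities, and income/wealth, in Rawls's order of priority, and the tuples are compared lexicographically, so no amount of extra wealth can outweigh a loss of liberty.

```python
# Hypothetical illustration of a lexical (lexicographic) ranking.
# Invented scores: (liberties, opportunities, wealth), highest priority first.

societies = {
    "A": (8, 6, 9),  # slightly fewer liberties, far more wealth
    "B": (9, 5, 3),  # most extensive liberties, modest wealth
}

# Python compares tuples element by element, left to right, which is
# exactly a lexicographic ordering: liberties trump everything after them.
best = max(societies, key=lambda name: societies[name])
print(best)  # "B": the liberty score decides the comparison before wealth is consulted
```

The design point is that the comparison never trades across levels: society A's wealth score of 9 is simply never reached, because B already wins on the first element.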
So that's the notion of a lexical ranking. You want to maximize each item in the lexical ranking subject to the constraint that it does not come at the expense of maximizing something that's higher in that lexical ranking. Okay, and then we talked about his first principle, and I gave you the illustration of religious freedom as the kind of thing he's thinking about when he talks about distributing all liberties in a way that gives people the most extensive possible freedom compatible with the like freedom for all. And this is not to be confused with the idea of neutrality. We worked through that. And so we gave the example of comparing, say, a fundamentalist regime with a regime that has a disestablished church. The reason for preferring the regime with the disestablished church is that the most disadvantaged person in that regime, namely, say, the fundamentalist, is less disadvantaged than the person who does not affirm the established fundamentalist beliefs of a fundamentalist regime. So you always compare the least--and I know it's a cumbersome way of putting it but there's not an elegant way of putting it. What you want to do is compare the condition of the most adversely affected person in each situation and say, "Which would you rather be?" basically, and you're going to always pick the one that minimizes the harm to the least advantaged person. So the standpoint of justice is the standpoint of the least advantaged person, but this isn't a bleeding heart point, but rather a self-interested point because you don't know who you are behind the veil of ignorance. Okay, now, let's talk about his second principle which is, in fact, divided into two principles, so he really has three principles. The first part of it he says, "Social and economic inequalities are to be arranged so that they are both attached to offices and positions open to all under conditions of fair equality of opportunity." That is, you'll see 2b. That's not a typo.
I'll come to 2a in a minute. For some reason, known only to John Rawls, he put 2a before 2b, but he meant to put 2b before 2a in the sense that 2b is lexically prior to 2a. So that's why I'm doing 2b first. And that 2b is what governs the distribution of opportunities. And he's essentially saying "fair equality of opportunity." What does that mean? It means no apartheid. It means, for instance, we still today have a system in America where occupation-by-occupation women earn about eighty-six percent of what men earn in exactly the same occupation. So there's gender discrimination in remuneration for employment, so we would say, "Those systems are illegitimate." Systems which reward women less than men on a systematic basis wouldn't be chosen. You would never choose a system that privileges one gender because you don't know whether you're going to turn out to be the women or the men. You would never accept the system of job reservation such as apartheid because you don't know whether you're going to be black or white, and not knowing you always look from the standpoint of the most adversely affected person, and so you would say no to apartheid. You would say no to a system which privileges one gender over the other. And then you can see, I think, how their lexical ranking would come into play because let's suppose you have a status quo in which, as I said, women on average earn eighty-six percent of what men earn in the same professions, and somebody comes along and says, "Well, so we need an affirmative action program to remedy that." Then the question would be, but does the affirmative action program conflict with anything protected by the first principle? And those opposed to it would say, "Yes," and those in favor of it would say, "No," and that's what you would be arguing about. 
Because it might be the case that if the only way in which you could achieve affirmative action actually interfered with the liberties protected by the first principle then you would say, "Even though it's necessary from the standpoint of the second principle, we won't do it." And if we had more time we could have gone into the New Haven firefighter's case, and maybe we can do some of this in section, that the Supreme Court dealt with last summer where essentially they said some version of the fact that the affirmative action program to achieve promotions in the New Haven Fire Department interfered with basic freedoms that Rawls would have put under the first principle. And of course the other side made the opposite claim. But that is essentially how it would be argued about. So I think that the principle of fair equality of opportunity is relatively straightforward in its own terms. The animating thought is that not knowing who you're going to be, you would never agree to a system that systematically disprivileges some group for fear that you're going to turn out to be in that group, okay? So it's relatively straightforward. But now I want to come to 2a, which is probably the argument in Rawls's book that's attracted the most attention, and that is that--it's actually third in his lexical ranking, and that is the claim that income and wealth is to be distributed "to the greatest benefit of the least advantaged individual." To the greatest benefit of the least advantaged individual. This is not a principle that Rawls invented. It's an old principle of welfare economics which used to go under the label maximin, not maximum, maximin, m-a-x-i-m-i-n, short for maximize the minimum share, maximin, maximize the minimum share. Rawls calls it the difference principle, but it's the same idea as maximizing the minimum share.
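Maximin is mechanical enough to state as a decision rule in a few lines. This is an illustrative sketch, not anything from Rawls or the lecture, and the income shares are invented: each candidate arrangement is a list of shares, and you pick the arrangement whose minimum share is largest.

```python
# Illustrative maximin ("maximize the minimum share"), with invented shares.

def maximin(arrangements):
    """Pick the arrangement whose worst-off share is the largest."""
    return max(arrangements, key=min)

equal = [5, 5, 5]        # strict equality
incentives = [6, 9, 14]  # unequal, but everyone, including the worst off, gains

choice = maximin([equal, incentives])
print(choice)  # [6, 9, 14]: the inequality is accepted because it benefits the least advantaged
```

Notice how this tracks the general conception of distributive justice described above: the unequal arrangement wins only because its worst-off share (6) beats equality's (5); an unequal arrangement whose bottom share fell below 5 would lose, however large the other shares.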
And the intuition behind the difference principle is exactly the same intuition that we've been talking about by reference to the general conception of distributive justice, which is, remember, distribute all goods equally unless an unequal distribution works to everybody's advantage. And you get from everybody's advantage to focusing on the condition of the worst off. Why? Because of this argument that, well, if you're the worst-off person and you can affirm something then everybody else will affirm it as well. If you'll choose it when you're the most adversely affected you'll also choose it if you're the second, or third, or fourth, or fifth most adversely affected person. Now, there's actually a complexity when you start to think about the distribution of income and wealth that has not come up in the consideration of the other two principles, which I'll just mention, and then say a couple of things about, and then move on and we'll come back to it later. And that is, well, but what if there was a principle that gave a very small benefit to the person at the bottom, but at a huge cost to the middle class? Would you choose it, because what are the odds that you're going to turn out to be the person at the very bottom? And we think about this for a variety of reasons. It might be a trickle down argument, or Bentham's claim that the rich will burn their crops before giving them to the poor, or some other argument. But if you could achieve a very minor increment to the condition of the person at the bottom at a huge cost to the middle class, you wouldn't necessarily want to do that. And so Rawls has two points to make about that, neither of which is entirely satisfying. The one is his argument about grave risks, and it works like this. 
It's the claim that, well, one of the things we know, and this is a perfectly uncontroversial claim, one of the things we know is that even when there's moderate scarcity that doesn't mean there won't be some people who are in grave danger. That is to say there's no necessary relationship between the level of economic development in a country and the distribution of income and wealth. So you can have a wealthy country, but there still can be extremely poor people in it. That's true. We can have bag ladies living out of lockers in Grand Central Station, at least when they used to have lockers in Grand Central Station, which they don't anymore, but let's not deal with that particular piece. So there's no necessary relationship between the level of economic development and the distribution of income and wealth in a society. Therefore, you have to assume that even if there's relative scarcity you might turn out to be the person who's starving. You might turn out to be that bag lady. And even if the probability of being that person is low, the costs of being that person are high. I don't know if you remember the argument Rumsfeld made in his counterterrorism strategy, the so-called one percent solution, and this was that even if there's a one percent probability that we're going to be hit by a certain kind of terrorist attack we should treat it as a hundred percent probability because the costs of being hit are so high. So the probability of the event may be low, but if you turn out to be that person you're going to starve to death, so this is Rawls's assumption about grave risks. So all of that's plausible enough. The reason I say it's not entirely satisfying is if you really took the grave risks idea seriously why in the world would you make this third in your lexical ranking, because after all what good is freedom of speech or freedom of religion to somebody who's on the verge of starvation. 
So it's not entirely satisfying in that sense that if it justifies saying, well, we will protect the person at the bottom even though the probability of being that person turns out to be low because of the grave risks assumption, why then--this is very annoying--why then would we make it third? But it's not really a deep criticism of Rawls in that you could just say, "Well, we should reorder his lexical ranking and put this higher up in it." But anyway, that's not entirely satisfying. And then the second thing he says that's not entirely satisfying is he says he's sort of sensitive to this problem that you might get absurdities out of this because if it's very costly to help the person at the bottom in terms of what other people have to give up maybe people wouldn't be that impressed by the grave risks assumption. So he throws in this idea of chain connection and he says, "Well, even though my argument doesn't depend on this, I think it's true." When somebody does that you know there's some sleight of hand going on. And he basically says, "Well, if you help the person at the bottom that will have some kind of chain reaction. It'll help the person at the next level, and that'll help the person at the next level, and that'll help the person at the next level," so it's a kind of Keynesian idea that if you stimulate the man at the bottom there'll be multiplier effects throughout the whole system that'll make everybody else better off too. Well, that may or may not be true, and it also, by the way, I think makes the disagreement between Rawlsianism and utilitarianism much less interesting because then anything that Rawls would choose a utilitarian would choose as well. And we really want to look at when they pull in opposite directions if we want to see what's at stake between them.
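Where the two criteria pull in opposite directions can be shown with a small invented example (the distributions are made up, not drawn from the lecture): rank the same two distributions by their sum, as a simple utilitarian would, and by their minimum, as maximin does, and the orderings reverse.

```python
# Invented distributions where maximin and utilitarianism disagree.
# Absent something like chain connection, nothing forces them to agree.

d1 = [2, 10, 10]  # worst-off person better off, smaller total (22)
d2 = [1, 20, 20]  # bigger total (41), but the worst-off person is worse off

utilitarian_pick = max([d1, d2], key=sum)  # picks d2: 41 > 22
maximin_pick = max([d1, d2], key=min)      # picks d1: 2 > 1

print(utilitarian_pick, maximin_pick)  # [1, 20, 20] [2, 10, 10]
```

If chain connection held, gains to the bottom would propagate upward and cases like this could not arise, which is exactly why assuming it makes the Rawls-versus-utilitarianism disagreement less interesting.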
But so this chain connection idea I think is just sort of--he throws it in there to make his argument look more appealing on consequentialist grounds, but actually there's (a) no reason to believe it's true, and (b) if it were, then what's really at stake between Rawls and utilitarianism becomes much less interesting, because by satisfying Rawls we're also going to be satisfying utilitarianism. So I think the best thing to do with chain connection is to ignore it, so I'm not going to say anything more. But I will say this in Rawls's defense on this point, which is, a lot of people who criticize Rawls--and I even did some of this myself, which I think is unfair in retrospect--create examples where helping the person at the bottom comes at a huge cost to others, and it looks rather implausible. But one thing we should say about Rawls is, he's not trying to give policy advice for every marginal choice. At one point he says in the book, "I'm thinking about the basic structure of society, the basic institutions." So he's not saying--I mean, the example people sometimes give is the Reagan tax cuts in the 1980s, or actually the Bush tax cut in the 2000s. But the Reagan one had this structure more explicitly, where there was a very big tax cut for the wealthy, a tiny tax cut for the people at the very bottom, and a huge increase in middle class taxes. Basically that was the structure of it. And people said, "So Rawls would prefer this." And the answer is he's not trying to make a recommendation at the level of the next incremental policy choice; he's trying to say how the underlying institutions should be structured. And so he would resist saying, "Well, this shows my theory is silly, or my theory doesn't generate conclusions that I want it to." He's not a policy wonk. He's thinking about constitutional principles, basic principles; the basic structure of society.
And indeed, I'll just make one footnote to that footnote which is if you start at the front of A Theory of Justice and you really plow through all of it, you get to about page 300 and something and he says words to the effect that his theory is agnostic between capitalism and socialism. And he took a lot of abuse for that in the 1970s and 1980s. People said, "Wow, you mean I plowed through 300 pages of a book about justice only to be told it's agnostic between capitalism and socialism? Give me a break!" But in defense of Rawls on that point he would say, "Look, what economic system actually operates in the interest of the least advantaged? That's an empirical question of political economy, trial and error and so on. That is not a question for political philosophy to settle. I don't know whether it's capitalism, socialism, or some version of a mixed economy, that works to the greatest benefit of the least advantaged person. That's for the policy makers and political economists to figure out. What I'm telling you is what the standard should be." And that is a good argument on Rawls' part. And he's saying, "This is what the standard should be. The standard should be that whatever system you have works to the greatest benefit of the least advantaged player when compared with other systems." And, yes, before the experience of centrally planned economies people may have thought some version of state socialism would do that. After a half a century of experience with it, doesn't look so good. So we'll go back to some kind of market system, and after decades of experience with unregulated markets or minimally regulated markets, and we discover the cost of those for the people at the bottom maybe we'll end up with something else. "So it's not a failing of my, John Rawls's, theory that I don't tell you what kind of economic system to have. My aspiration is to tell you what the normative criterion is that it should meet." So I think that's the most important takeaway point. 
Now let's give you a picture for those who like pictures, and for those who don't we will explain it in words. This is going back to our Pareto style of diagram. Let's suppose that's the status quo. And now we've got primary goods, in this case income and wealth for two people; A up here and B along here. And that is the status quo. So A has more than B. Rawls' difference principle, or the so-called maximin principle of welfare economics, says, "Drop a line down to there, (that point is perfect equality, right) and then go east, and everything in this shaded area is what we might call Rawls superior to the status quo." So it's a kind of L-shaped indifference curve. It goes down through the status quo to equality and then it turns right. Anyone want to take a stab at telling us why? Why would you have these L-shaped indifference curves? Yeah? Why don't you get the mic, or come to the mic? Student: You can move right as far as possible and that will be increasing the goods for B, and then as you move down, you have to stop at the quantity... Prof: Say A has this much when we start out. Why isn't this point here that I'm lining up, say, Rawls preferred? Student: Because then A becomes the least advantaged person and they have less than B had before. Prof: Exactly right. You got it. So the reason we head east or turn right at the point of equality is, what are we trying to do? We're maximizing the minimum share. All we want to say is that whoever turns out to be at the bottom has the highest possible share, so if we went from X to down here then we would have changed who's at the bottom, and that's not important. What would be important is that this bottom share would be smaller, and we wouldn't want that. So we don't care who gets it because we don't know whether we're A or B. That's not material. We don't know, when the veil of ignorance turns out to be lifted, whether we're going to be A or B.
So we're just going to assume we're going to be whoever turns out to be the worst off. So this distance here represents the minimum and you wouldn't want it to get smaller, basically. So if we moved anywhere in this area here the minimum share would get bigger. So if we went to Y then we could do a new L-shaped indifference curve--why is it doing this? Much later, why isn't there a "restart much later" button? Okay, so that's the basic idea. You just get keep getting these L-shaped indifference curves. Now I want to say something about what a radical idea this is in a philosophical sense. It's not necessarily that radical in a distributive sense for the reason I've already indicated to you. It could be compatible with trickle-down if we took the view that trickle-down works better than any other system from the point of view of the least advantaged. Let's put this Rawls, Bentham and Pareto compared. This is a little taking something of a liberty because we've got different things on the axes, so it's sort of a little ultimately not coherent, but I think you can still get an insight out of it. If we start with that status quo we know what's Pareto-preferred, so everything that's Pareto-preferred is also Rawls-preferred. So if it turned out to be true that the best way to help the person at the bottom is to have only the market transactions then we would do it, but if it turned out that there were other ways that were Pareto-undecidable, like these, to help the person at the bottom we would do that. So it's not necessarily radical in a distributive sense. You could get very egalitarian radical redistribution, but you could also get--you could get no redistribution. You could get the Pareto system if that turned out to be the way in which the most disadvantaged person is helped the most. 
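The L-shaped indifference curves amount to ranking allocations by their minimum share. A minimal sketch of that test, with my own invented numbers (the lecture's diagram has no specific values):

```python
# An allocation is "Rawls superior" to the status quo exactly when its
# minimum share is larger; allocations with the same minimum lie on the
# same L-shaped indifference curve. It does not matter whether A or B
# turns out to hold the bottom share.

def rawls_superior(candidate, status_quo):
    """True if the worst-off share grows, regardless of who holds it."""
    return min(candidate) > min(status_quo)

status_quo = (30, 10)                        # A has 30, B has 10
print(rawls_superior((30, 15), status_quo))  # True: bottom share 10 -> 15
print(rawls_superior((15, 12), status_quo))  # True: who is at the bottom
                                             # changed, but the bottom grew
print(rawls_superior((50, 10), status_quo))  # False: bottom unchanged, so
                                             # this lies on the same "L"
```

The third case is the point of the diagram: making A richer while the bottom share stays at 10 moves along the L, not into the Rawls-superior region.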
But it is radical in a philosophical sense that I think is captured by the observation that we don't care whether we turn out to be A or whether we turn out to be B, and that is the following. There's been a huge debate in our lifetimes over whether the differences between us are the result of nature or nurture, an enormous debate. You read a book like Charles Murray's The Bell Curve. How many people have heard of that book? Nobody? Wow, so how quickly things change. Well, it was a book that came out probably about twenty years ago, that's probably why you haven't read it, basically saying that the differences between us are genetically determined. There are genetic differences in IQ that show up in various ways including racially, and there was a huge storm of criticism. He was accused of being a racist, and there were charges and countercharges, and people said, "No, it's not genetics, it's environment," and so on. So one of the most important things-- and I'm going to focus on this much more next Monday, I just want to mention it now so you can think about it-- one of the points Rawls makes is, "Look, possibly the differences between us are genetic. If the differences between us are genetic it's just moral luck, because you didn't choose to have the genes that you have, and not only didn't you choose it, you didn't do anything to get the genes that you've got. It's moral luck. On the other hand, suppose differences between us are environmental? Well, it's moral luck. You didn't choose to be born in the country and the family you were born into. You didn't make any choices in that regard. Furthermore, you didn't do any work to be in the country or the family you happened to have been raised in. Again, it's just luck. From your point of view it's a completely random thing. You could have been born somewhere else to somebody else, or you could have been born to parents who didn't have the resources that your parents have.
So this whole debate about nature and nurture," says John Rawls, "is beside the point. From the standpoint of justice we don't care." And that is, I think, for what it's worth, the most important argument in Rawls's book: that the differences between us are morally arbitrary whether it's nature or nurture. It doesn't matter. They're not the result of choice, and they're not the result of work. They just fell out of the sky as far as we're concerned. That being the case, and I'm going to go into the assumptions behind that in more detail on Monday, but from the point of view of this discussion we don't really care if A or B is the worse-off person. We're just going to say from the standpoint of justice we want to improve the lot of the worst-off person, even if the worst-off person changes. So we go from X to G. It's morally irrelevant. All we want to do is maximize the share of the person at the bottom. So that's the Rawlsian difference principle. And as I said, you can see it overlaps with and contains the Pareto Principle. And it has some overlap with Bentham in that it would sanction moving into the Pareto-undecidable zone here, which would be Bentham preferred, if it works for the greatest benefit of the least advantaged person. And Rawls's claim to you, the reader, is that this is the principle you would choose. You would want the economic system that works to the benefit of the person at the bottom. What do you think? Who likes this idea? Who doesn't like it? What don't you like about it? Who was--it was here, yeah. Student: It assumes that once you're born you're going to stay in that position for the rest of your life. There's nothing you can do about it. Prof: Okay, well that's a good observation. I'm not entirely sure what you're saying. Just explain a little bit more and I'll see if you're saying what I think you're saying. Student: Well, what about the effort that people put into changing their social position?
Prof: Ah, what about effort? Okay, what about effort? I thought you were making another point, so let me just respond to the point I thought you were making, which you weren't making, but we should nonetheless address since people do sometimes make it. But then I'll come to your point which is, anyway, much more interesting. The point I thought you were making is this has no dynamic side to it. That is to say it's static in exactly the way the Pareto Principle is static, but any economist would want a theory that has a dynamic dimension to it. You would want to know, over time, what's the effect of a certain redistributive change? So we would want to say, "Well, if benefiting the person at the bottom slightly improves their welfare in the next three months, but it comes at the cost of lower economic growth over time, would we want to do that?" And it's fair to say Rawls doesn't have an answer to that question. He doesn't have a dynamic theory. On the other hand I think his defense, this is why it's ultimately not a very interesting criticism, I think his defense would kick in, that, "Well, I'm telling you what the criterion should be, not how to run the economy." But let's come to the point about effort. And this is going to, to some degree, get us into next Monday's lecture, but it's good to make a start at it because it's a very deep point, actually. What about effort? So yes, the capacities we have might be distributed in morally arbitrary ways, but some people choose to work really hard and some people choose to sit on the couch and watch ESPN all day. And let's suppose you have two people with exactly the same IQ, but one watches ESPN all day and one studies hard, so the one who studies hard gets the A, and the one who watches ESPN all day gets a C, and I take the import of what you're saying to be, "Well, there's some legitimate desert there. The person who works should get the A, yeah?"
Now Rawls is sort of with you, but in a way that I don't think works for him, because if you read Rawls carefully what he says is exactly what you've said. He says, "Yes, the differences between us are morally arbitrary, but the use we choose to make of our capacities is not." Why doesn't it work for him? This is sort of like Bentham being scared of the egalitarian implications of his theory and so wheeling out the difference between absolute and practical equality, but it doesn't really work for him either, for reasons we saw. Why doesn't this really work for Rawls? Yeah? Student: Couldn't you say that someone's naturally, just by luck, given a capacity or a predilection to work hard? Prof: So that's exactly where I was hoping you would go. That, well, some people have a supercharged work ethic and some people don't. And why do some people have a supercharged work ethic? Because of the way they were raised, perhaps? Maybe some of it's genetic, perhaps? But why isn't that morally arbitrary as well, if the differences in IQ are morally arbitrary? So weakness of the will is morally arbitrary too, or strength of the will is morally arbitrary. The person who sits on the couch watching ESPN all day just doesn't have the moral luck to have a lot of Puritan work ethic, so he shouldn't be penalized for that. So now you can see why Rawls doesn't want to go there, because it has the effect of completely obliterating the concept of any personal responsibility, ultimately. Because once you make that move, why should you differentiate between the weakness of the will or the strength of the will and say, "That's not morally arbitrary, but differences in IQ are morally arbitrary, or athletic ability is morally arbitrary"? It doesn't seem to work. So it's not a satisfying way out for Rawls, and he does it because he's afraid of the radical implications of this view.
But what's interesting about this is, you know, Rawls's fix doesn't work, but his underlying arguments are very powerful arguments. I mean, isn't it right? Isn't it just true that the differences between us, nature or nurture, are morally arbitrary? It is moral luck, whether it's genetics or upbringing. Nothing you did, nothing you chose; nothing, therefore, you have any particular right to. So you guys think you all worked so hard to get into Yale and all this and you deserve to be here. It's a load of bunk. None of you deserve to be here more than anybody else. That's what he's saying. It might be a nice fiction you tell yourself. As this little exchange showed, his attempt to put some limits on this idea is just pathetic. It doesn't work. But the basic argument about moral arbitrariness is totally compelling. Anyone here think it's not compelling? And "I don't think it's compelling because it has implications I don't want to live with" is not a good argument. You have to have some other reason. Let me say this: I think it has implications that, if you really drill down into them, probably nobody in this room wants to live with, just like John Rawls didn't want to live with them. But what's a good way out? Who thinks I'm wrong? Who thinks this is a bad argument? It's just not a good argument. Nobody? Hmm. Yeah? Student: Not that I disagree, but it seems to strip down human free will in that if the only condition which matters is the circumstances of your birth then you don't really have any choice as to the course of your life. So it seems completely deterministic which might not be... Prof: Well, yes and no. I think it's agnostic on the question of free will. He's not saying we don't have free will. Maybe we do. Maybe we don't. I think what he's saying is, if some of us have a greater capacity, say, to work hard or to engage in delayed gratification than others, that is a difference between us.
Just as an empirical matter, that is a difference between us. What he's saying is that the person who has the greater capacity for deferred gratification or the greater capacity to work hard isn't entitled to more benefits than the person who doesn't have it, just in virtue of that strength of the will. I mean it might also be true that we don't have free will, that's another matter, but I think he's just not taking a position on that question. You could do it two by two and fill in all the boxes. He's not saying we don't have will, we can't make choices, he's just saying the choices that we make don't give us any particular rights. Now, I mean, I think what is, and maybe what you're getting at that does pin the tail on the donkey is, what he's saying is ultimately subversive of the idea of individual responsibility, but that's not the same thing as determinism. They come together in other settings, so if somebody says, "Well, I committed the murder, but I was in the grip of a schizophrenic disorder, and so I didn't have free will, so I'm not responsible," that's when determinism and the issue of the will come together, but he's not making that kind of argument. He's conceding, I think, for the purposes of discussion, that there is free will. But I'm saying, when you take away his fix, which really I don't think does work, you're saying the differences that flow from our strengths of will shouldn't entitle us to anything in particular. So none of you deserve all the good things you've gotten in life just because you worked hard. So what if you worked hard? You had the capacity to work hard. Other people didn't. So anyone think it's just not a good argument? Anyone think it seems like a good argument but you really don't like it, at least some? Who really likes it, the people who want to go and watch ESPN all day? There are philosophers who follow this intuition. There's a guy called Philippe Van Parijs, a Belgian political thinker. Yeah, what were you going to say? 
Student: Is this an argument in favor of complete equality, then, in terms of redistribution? Prof: Okay, so that's a good question. I'll leave Philippe out of it because your question's more important than what he has to say. Is this an argument for equality? Rawls's answer is a qualified yes. He's saying it's not an argument for equality. It's an argument for the difference principle. He's saying it's an argument for distributing things in such a way that they benefit the person at the bottom. Now you have to have a whole theory of how the political economy works to say whether redistribution to absolute equality would do that, because if redistribution to equality would destroy incentives, let's say, so that over time this would go this way, then it wouldn't be. So he would say, "What it's an argument for is detaching what we get from any theory that ours is some kind of moral right, and connecting it to some theory, the best going theory of the day, about how you organize an economy to benefit the person at the bottom. That's what you should do. If equality does that you have equality. If the market does that you have the market, but it's not anything else. It's just this pure consequentialist claim. Do it in order to help the person at the bottom." From Philippe Van Parijs's point of view: he wrote a book called Real Freedom for All and he said, "Yes, everybody should get a minimum basic income, and it shouldn't have anything to do with their work, their capacity to work." So in the famous one-liner, even surfers should be fed, as Van Parijs puts it. There should be the highest sustainable universal basic income, whatever that is, and it shouldn't be connected to work, because capacity to work is morally arbitrary.
So what I'm going to talk about on Monday is we're going to really dig into this question because you can now see, I mean, one of the ironies I want you to mull over between now and then, one of the ironies is that this puts Rawls way to the left of Marx in a certain sense because as we saw Marx was a straight-up Enlightenment theorist wedded to the whole workmanship idea. Remember Locke, workmanship, labor theory of value; all that stuff? Marx's critique of capitalism was the worker doesn't get what he produces. Rawls is saying, "We don't care in any moral sense. We don't care who did the work because the capacity for work isn't a capacity that brings with it any particular moral valence because of this moral arbitrariness argument." Okay, we will pick up from there next week.
|
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
|
12_The_Marxian_Failure_and_Legacy.txt
|
Prof: Okay, this morning we're going to finish talking about Marx, and we're going to focus on the failures of Marxism and the legacies of Marxism, and the failures are connected to the legacies in important ways that we'll unfold as we proceed with today's discussion. I began this already on Monday by talking about the difficulties with Marx's assumptions about scarcity and superabundance. And we saw that just as Mill couldn't banish politics from politics through the mechanism of a neutral definition of harm, Marx is unable to banish politics from politics by somehow wishing distributive conflict could, in principle, go away. We saw that that notion of superabundance that informs his communist utopia is incoherent in principle, and that means that distributive conflict is endemic to human society no matter how much wealth there is. And some principles for the distribution of income and wealth are going to have to be argued about and defended regardless of what is produced in society, or what could be produced in society. The second failure of Marxism is well known, but we should, nonetheless, mention it, which is that his historical predictions turned out to be hopelessly wide of the mark. Not only was he wrong in 1830 and 1848 when he thought that communist revolutions were about to sweep through Europe. They were both (a) not communist and (b) quickly reversed, in any event, within a couple of years. His larger historical predictions were also wrong. He thought communism would come, socialist revolutions, for reasons that you now know because we've worked through the macro theory. 
He thought that socialist revolutions would occur in advanced capitalist systems that had become uncompetitive because of the replacement of competitive capitalism with monopoly capitalism, and in fact where we saw revolutions bearing the communist label was in peasant societies: in Russia, and in China, or in Eastern Europe in countries where it was actually more or less imposed from the outside by the Soviet Union after World War II. We didn't really see any society go through the sort of path Marx was thinking of in his larger teleological theory of history, namely from feudalism, to capitalism, to socialism, to communism. It simply didn't happen. And as the reversals of the revolutions of 1830 and 1848 remind us on a smaller scale, Marx's bigger idea that there's some purpose or direction to history seems questionable by the first decades of the twenty-first century. History doesn't go in a single direction, and this is a theme to which we will return, but you can see movements toward more egalitarian systems and then movements away from egalitarian systems. You can see democracy created and then you can see it collapse into authoritarianism, so that there isn't a single teleological or directional focus of history of the sort that Marx was looking for. So on the big predictions, Marxism doesn't look very good from the vantage point of the twenty-first century. But some of his smaller predictions were also wrong in ways that are in some respects more interesting for our purposes, and so I'm going to go back through some of his arguments and focus on things that were wrong with those arguments that we can, nonetheless, draw some interesting conclusions from as we go on our way in examining the moral foundations of politics. And I'm going to start with his macro theory and talk about some difficulties with that, and then I'm going to go backwards into the micro theory. So we're doing the reverse, if you like, of what we did on the way in.
We started with the micro theory and we saw how that generated the macro theory. Now we're going backwards through the macro theory, and then we will go back into the labor theory of value and the assumptions about workmanship that underlie it that he took over from Locke and secularized and modernized, as you know. So if you'll recall from Monday's lecture, Marx's macro theory was an invisible hand theory like Smith's, except it was a malevolent invisible hand whereas Smith's was a benign invisible hand. And one element of it was the argument about the potential for liquidity crises, that there would be the possibility that people would hoard money, that money would stop flowing through the system, and the system would thereby become sclerotic. It is the case that capitalist systems have the potential for liquidity crises, but one of the things Marx greatly underestimated was the capacity of capitalist states in capitalist societies to address things like liquidity crises, and other things as well, we'll see. It's almost as if he didn't really take it seriously when he and Engels said, in The Communist Manifesto, that the state in capitalist society is the executive committee of the bourgeoisie. He underestimated what the state could actually do to preserve capitalism. A good example: in the early years of the Clinton administration there was a huge liquidity crisis in Mexico where the whole Mexican economy was on the verge of complete collapse, but the western governments, led by the United States, put together a fifty billion dollar package to pump liquidity into the Mexican economy until the crisis was over, and they succeeded. And so we didn't see the kind of collapse in the Mexican economy that that liquidity crisis had the potential to create.
So the argument about the potential for liquidity crises is valid, but we have no particular reason to think they can't be managed once the sources of liquidity crises are understood, and governments have the levers available to them that were made available to the Mexican government during the Clinton Administration in the U.S. Secondly, Marx has this argument about the declining tendency in the rate of profit. As I said to you, every classical political economist believed that there was a declining tendency in the rate of profit in market systems, and they thought one of their jobs was to account for it. They thought it was definitely the case that there is this phenomenon, and you had to explain why it occurs. In fact, if we look over the long course of capitalism since the nineteenth century, it's far from clear that there is, as an empirical matter, a long-term declining tendency in the rate of profit. In Marx's case, as you know, his story about the declining tendency in the rate of profit had to do with the increasing capital intensity of production. That as capitalists compete by relying more and more on what he calls constant capital, there's less and less fresh value created, and so what capitalist entrepreneurs do at the margin to be more profitable in the medium run makes them less profitable. So the capitalist who puts the spinning jenny in increases his profits in the short run, but when there are spinning jennies in every cotton factory in the economy the rate of profit is lower than before the first spinning jenny had been put in. And Marx identified that as the basic dynamic driving the declining tendency in the rate of profit. Two problems with it; one is that it assumes there's this sort of finite number of industries.
Because you could grant his argument and say, "Yeah, it's true that the rate of profit in the cotton industry will fall as it becomes more and more capital intensive," but there are all kinds of new industries that are going to come into being all the time. So capital will flow eventually into other industries and they'll start out with big profit margins, and then the profit margins will get competed away, but then capital will go somewhere else. So unless you have this idea that there's a sort of fixed number of lines of production, such that once profits start to fall in all of them they're going to fall economy-wide, there's no particular reason, even working from Marx's own premises, to think that he's come up with any lasting account of why the rate of profit would fall in capitalist systems. Moreover, even putting that problem to one side, if you think about it, even if it is the case that making production more capital-intensive reduces profit margins in the long run, nonetheless, if productivity goes up at a more rapid rate than capital displaces labor in the production process then you wouldn't necessarily see the rate of profit fall. So if the rate of increase in productivity exceeds the rate at which capital is displacing labor, or to put it in Marx's jargon, constant capital is displacing variable capital, then there's actually no reason to expect the rate of profit to fall. So it all depends on how much more productive capital makes labor, and there's no theoretical answer to that question. It's an empirical question. So even though Marx thought there was a declining tendency in the rate of profit, it's debatable whether in fact there is, and it's certainly not the case that his theory explained why it should occur. His third argument: competition eliminates competitors, and because production is becoming more and more capital intensive, entry costs are going up and you're not going to see new people coming into the market.
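The productivity offset described above can be put in Marx's own jargon with a small numeric sketch. The numbers are mine, chosen only to illustrate the point; Marx writes the rate of profit as surplus value over total capital advanced, s / (c + v).

```python
# Rising capital intensity (c/v) pushes the rate of profit down, but rising
# productivity pushes s/v up, so whether the rate actually falls is an
# empirical question. All figures are hypothetical.

def rate_of_profit(c, v, s):
    """Marx's rate of profit: surplus value s over constant capital c
    plus variable capital v."""
    return s / (c + v)

# Before mechanization: low capital intensity.
print(rate_of_profit(c=100, v=100, s=100))  # 0.5

# After mechanization with stagnant productivity: c/v rose, s/v did not,
# and the rate of profit falls -- Marx's prediction.
print(rate_of_profit(c=300, v=100, s=100))  # 0.25

# After mechanization with a large productivity gain: c/v rose, but s/v
# rose faster, and the rate of profit actually increases.
print(rate_of_profit(c=300, v=100, s=300))  # 0.75
```

The third case is the one the lecture points to: nothing in the formula guarantees a fall once productivity gains are allowed to outpace the displacement of labor by capital.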
I told the story about NASA and the O-rings on the Challenger blowing up. There was nobody to come into the market and take over then, so you have a monopoly, a not-very-efficient monopoly producer who lacks the incentive to innovate, and we have the sorts of problems we saw that developed with Morton Thiokol in that instance. So that's also true only under a fairly restrictive set of assumptions which might not turn out to be true. After all, there are some industries in which there are economies of smallness. Think about Apple Computers invented in somebody's garage in California completely upending IBM and the big established capital-intensive computer firms, and totally transforming that market. So it's not necessarily true. Marx had in his mind nineteenth-century industrial production, steel works, cotton factories, this sort of thing, but it's a very historically-bounded perception of what it is that goes on in economic systems. And there might be all sorts of sectors in the economy that tend to resist the development of monopolies, and in which there are various economies of smallness that actually lend themselves to the constant entry of new players that keep revitalizing the system. And I think the information technology revolution that has accompanied your generation would be exhibit A in defending that proposition. Under-consumption, or sometimes it's said over-production, obviously it means the same thing. The way we put it into Marx's sort of conceptual scheme was the argument that the workers collectively couldn't buy everything they produced. Smith believed that. Ricardo believed that. Marx believed that, and early theories of imperialism were partly dreamed up in order to explain that; that you get imperialism partly as a result of the search for new markets to address the endemic weak demand in capitalist systems. Marx is not the only person who thought there was endemic weak demand in capitalist systems. 
Indeed, Keynes, the great English economist of the first half of the twentieth century, and the theorist whose ideas informed the policies to end the Depression, also thought there was endemic weak demand in capitalist systems because of a diminishing marginal propensity to consume. Keynes' idea was if you have no money and I give you a dollar you'll spend it, but if you have a million dollars and I give you a dollar you'll save it. And so when you get into recessions the problem is weak demand, and the Keynesian answer to recessions or depressions is to spend money at the bottom, for governments to do that largely through borrowing. And this is, of course, what first, the George W. Bush Administration in its final year, and then the Obama Administration in its first year, have been doing. A classic Keynesian response to what's now being called The Great Recession to prevent it from becoming a great depression. Namely, the state borrows money, tries to spend it in the part of the economy that will stimulate demand and then will get the economy going back up. So, again, this is an example of Marx's chronic underestimation of the capacity of the state to do things that will stave off crises, or manage crises, or prevent them from becoming catastrophically bad. And so I think we've seen a vivid illustration of that in the last couple of years. Finally, working class consciousness; as all of these other things were going on and making the system creak at the joints and become less and less functional, Marx thought that the workers would start to become a class-for-itself. The workers would start to see that they were getting ripped off and get angry about it, become mobilized and militant, and reach the point where they believed that they had nothing to lose but their chains. Now there are two problems with that. One we already mentioned last Wednesday. This is that Marx was half-right in thinking that people judge their utility by what others get. 
They don't just ask the Reagan question, "Am I better off than I was four years ago," they do pay attention to what others get, but they generally, and this is the part where he was wrong and a hundred years of industrial sociology has now pretty much established this, they tend to compare themselves to similarly-situated people. So workers in the auto industry compare themselves to steel workers or coal workers, not to the executives who run the firms in which they actually work, and that's true up and down the occupational scale. I think I mentioned a professor would be much more upset to learn that their salary is five thousand dollars less than the professor in the next office than they will be to learn that their salary is five hundred thousand dollars less than the attorney who lives down the street. People compare themselves to others, but to similarly-situated others, not to people very far from them in the socioeconomic order. And so that kind of militancy doesn't eventuate. More importantly, Marx actually conflates the relative immiseration of the proletariat with the absolute immiseration of the proletariat. So if we went back through the slides and we went back to the discussion of the theory of exploitation, remember when we did that little exercise and we saw that you would all actually agree to be more exploited on his definition than less exploited when you had the choice of either going to a ten-hour working day with new technology or an eleven-hour working day without it. But that measure was a relative measure. It was what you get as a proportion of the total surplus as compared with what the capitalist gets. It wasn't an absolute measure, and we saw that it's perfectly possible for the rate of exploitation, as he defines it, to go up while the level of wages remains constant or even increases. So wages might be going up as well as exploitation at the same time. Well, but if that's true you're never going to reach absolute immiseration. 
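A stylized bit of arithmetic makes the relative-versus-absolute distinction concrete. The numbers here are illustrative assumptions, not the figures from the earlier lecture's exercise. Take Marx's rate of exploitation e = s/v over a ten-hour day, and suppose the wage bundle initially takes 5 hours of the day to reproduce, while after new technology a larger wage bundle takes only 3 hours:

```latex
% Rate of exploitation over a 10-hour day (illustrative numbers):
e = \frac{s}{v}, \qquad
e_{\text{before}} = \frac{5}{5} = 1, \qquad
e_{\text{after}} = \frac{7}{3} \approx 2.33
```

Relative immiseration (the exploitation rate) rises from 1 to about 2.33, even though the workers' absolute position, the wage bundle itself, has improved. That is why rising exploitation in Marx's technical sense never guarantees the absolute immiseration that was supposed to trigger revolutionary consciousness.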
You're never going to actually reach the point where workers are literally falling into poverty. And if it's the absolute immiseration that has to trigger the militant action, the working class consciousness, it's not going to happen. And indeed, here again Marx greatly underestimated what governments can do to make sure that the workers don't reach the point where they have nothing to lose but their chains. In the 1950s, an English Marxian political economist called Ralph Miliband (whose son, David Miliband, is now a British cabinet member, by the way, and the likely next leader of the Labour Party after they lose the elections in May to David Cameron) wrote a book called The State in Capitalist Society-- I think it was 1954 but don't hold me to it-- in which he said, "The welfare state is capitalism's best friend." The right always attacks the welfare state, but it's capitalism's best friend because it buys off working class discontent, and it ensures that workers have a stake in the existing order, and that they will never reach this proverbial situation where they have nothing to lose but their chains. So when you work your way through this macro theory it's riddled with holes and doesn't add up to the collapse of capitalism, and it's not, therefore, particularly surprising that capitalism didn't collapse in the ways that Marx predicted. But now let's dig in a little bit more to the micro theory, and the micro foundations of Marx's thinking, because I think this is where we will find some interesting lessons for our own project in this course going forward. At the heart of the micro theory is the labor theory of value, and we're going to say something about three aspects of it. 
We're going to talk about Marx's assumptions about living human labor, the moral argument behind the labor theory of value, which many people in this room will find appealing despite the problems with the labor theory of value, and then some alternative formulations of what it is that he was trying to do. If you think back to when we were doing the exposition of the labor theory of value I said that Marx defends it, living human labor-power as a source of value, by saying it's the only thing that creates fresh surplus value, okay? So one difficulty is, and John Roemer points this out in that piece that I had you read, it ignores the contribution of dead workers. Living workers, for instance, when we talked about the introduction of machinery, the spinning jenny or whatever, what about the workers who made the spinning jenny? Aren't they part of this calculation? Aren't they being exploited either by the capitalist or by the living workers who use the spinning jenny? So if you took it seriously you'd have to say those exploitation indexes are way too simple because they don't capture the contribution of dead workers. But then a second thing it doesn't capture: the labor theory of value assumes the capitalist contributes what, nothing? But why isn't it the case that the labor that the capitalist performs also goes into the creation of the surplus? And again, I think you've got to imagine, get yourself back into this nineteenth-century mindset where intellectual work doesn't seem particularly important. You're just running these big factories. But, of course, when we think about what the role of entrepreneurial ideas is in the creation of productive systems, it's absurd to say that the work of the capitalist doesn't contribute anything to the value of what's produced. 
But then you get into the problem, well, how do you discover what is the result of the work of the worker and what is the result of the work of the capitalist, and there's no mechanism for dealing with that. And then what about the fact that the worker in Marx's typical model has a spouse at home feeding him, making his sandwiches as he goes to the factory and so on and so forth. What about the spouse at home? Why is it just the worker who's being exploited? And various theorists have said if you take Marxism seriously on its own premises, it seems that, again, just as with the dead worker, the stay-at-home spouse is either being exploited indirectly by the capitalist or is being exploited by the worker who goes to work, for whom she is doing unpaid labor. And so we've had a series of feminist critiques of the labor theory of value. And this gets recognized in daily life. There is a very interesting 1986 divorce case in the State of New York in which a couple had gotten married, he had gone to medical school, she had stayed at home and darned his socks and made sandwiches for him, and helped him through medical school. Anyway, many years later they get divorced. And apart from the usual issues in the divorce, the court said that the stay-at-home wife had a property interest in her husband's medical practice that was a byproduct of the work she had put into it by darning his socks and making his sandwiches while he went to class and built up his practice. And so they awarded her a forty percent property interest in the practice and required him to maintain life insurance in her name for the rest of his life so that her property interest could be protected. So you can see, once you start to do this, you know, I've just given three areas where this runs into trouble, once you take seriously the idea that labor-power, the capacity to work, is the source of value, why zero in in this monomaniacal way on what one worker does in the production process? 
What about all of the others who contribute to the productivity of the process either directly as with the capitalist or indirectly? And of course once you make this point about the stay-at-home spouse, what about the Sunday school teacher who drummed the work ethic into the worker? Didn't that Sunday school teacher contribute something to the productivity of the worker, and so on? So you're going to get this huge web of indecipherable entangled entitlements if you're trying to trace out who contributed what work to the creation of something of value. More fundamentally, let's dig into the assumption that making creates entitlements. After all, for Marx this is what seems to give the theory its ideological edge. As I said, it's a secular version of Locke's workmanship. It's the claim that workers produce something for which they're not compensated. They produce something and the capitalist takes it, right? That's the claim. And when we were doing the exposition of that I said that what differentiates labor-power from all other commodities is that it's necessary for the production of every commodity. It's the common denominator of all commodities. Remember I gave you the example; I said if I have a certain amount of money and I spend it on a meal, I consume the meal and it's gone, whereas if I spend it paying somebody to paint my house, at the end of having consumed their labor-power I have a more valuable house. If I go to sell the house I'm going to get more for it than I would have gotten had I not had it painted. And so that's the idea that the consumption of living human labor-power leads to the creation of fresh exchange value, whereas mere consumption of a meal doesn't. Terrible argument. What's wrong with it? Nobody can see what's wrong with it? Come on. It's a terrible argument, hopelessly bad argument, anybody? Yes, ma'am? Student: Could the meal's value be that it keeps you alive? Professor Ian Shapiro: Say a little more. You're dead right. 
Student: So the meal, even though you can only use it once, it is necessary for the sustained value of yourself. Professor Ian Shapiro: Okay, and just take that thought a little further. Where does it go? You're dead right. Student: So the value is constant, or something? Professor Ian Shapiro: Well, not exactly, but the point is it's just wrong to say when I consume that food it's gone, because after all I have more calories of energy which I could then use to paint my own house with, let's say. Of course I might sit on the couch and watch the Super Bowl and just get fat, but that's my own decision, right? I can use that energy to paint the house. So a very interesting Cambridge economist called Piero Sraffa wrote a book called The Production of Commodities by Means of Commodities. Even though it's only about an 80-page book, it took him thirty years to write. That's a whole different story which I don't have time to go into. But this is what he said that's of interest to us. He said, "Imagine an economy that has three commodities: corn, books, and labor, where corn is the food. Yes, it's true that it takes labor to produce corn and to produce books. Yes, it's true that it does not take books to produce corn or labor, but it's not true that it doesn't require corn to produce labor and books. So corn, or food, or anything that's necessary for the production of labor-power is going to have the same property that labor-power has. It's going to be present necessarily, directly or indirectly, in all lines of production." And so Sraffa said, based on that idea, you can do a corn theory of value for this economy that will have exactly the same mathematical properties as the labor theory of value, and then you can have your theory of the exploitation of corn by capital, the rate of exploitation of corn by capital that will be exactly analogous to the rate of the exploitation of labor by capital. 
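Sraffa's symmetry point can be sketched formally. This is a reconstruction in standard input-output notation, not notation from the lecture: let a_ij be the amount of good i used to produce one unit of good j, l_j the direct labor input into good j, and k_j the direct corn input.

```latex
% Labor values \lambda_j: labor directly and indirectly embodied in one unit of good j
\lambda_j = \sum_i a_{ij}\,\lambda_i + l_j
% Corn values \mu_j: defined symmetrically, corn directly and indirectly embodied
\mu_j = \sum_i a_{ij}\,\mu_i + k_j
```

Because corn, like labor, enters every line of production directly or indirectly, the two systems of equations have the same mathematical structure, so an "exploitation of corn by capital" index can be computed exactly as the exploitation-of-labor index is. That formal interchangeability is what shows the exploitation measure is doing moral, not merely technical, work.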
There's no difference, mathematically identical. Why is that interesting to us? It's interesting to us because it shows you that whatever Marx says about exploitation merely being a technical notion, it's not. It's a moral idea. Colloquially we talk about the exploitation of natural resources, but we don't say that the corn owns what was produced with the corn. We don't say the corn was exploited in the way the worker was exploited, we just don't. Think about an intermediate case; the horse down a mineshaft in a coalmine hauling trucks of coal from the face to the elevator that's going to take it out. We could do the whole Marxian story. We could say the horse covers the cost of its feed in the first hour of the day, and it works ten-hour days, so the other nine hours it's producing surplus that accrues to somebody other than the horse. Is the horse exploited? Yes? No? Nobody thinks the horse isn't exploited. So a much trickier case, and the reason it's tricky is some people are much more open to the idea of animal rights than others, right? So Locke said animals were the waste of God put there for our use, but we're not that hard-hearted, some of us. So if you think of the horse as in some sense a kind of moral agent then it's not a happy thing that it's being exploited in this sense. But whether or not we want to say the exploitation is unjust-- not just cruel if the horse is suffering, but unjust--depends upon some prior idea that creatures are entitled to the product of their labor. That's the workmanship idea. And that is why even if you reject the labor theory of value, you're still left with this nagging idea that there's something to workmanship. Most people are not going to want to totally get rid of workmanship. If I write a book I think, "I put all that blood, sweat, and tears into that book, it's mine. It's mine, my work." Somebody takes it, "You've stolen something that I made." It's a very powerful thing in people. But not everybody accepts it, right? 
Not everybody accepts this idea, this very individualist-centric conception that we own what we make. We have this kind of exclusive--to go back to the language of Locke in The First Treatise, that God made us with the capacity to be miniature gods, to have the same ownership rights over what we make as he has over his creation. Look at Chief Seattle. He has a very different view. "This we know: the earth does not belong to man, man belongs to the earth. All things are connected like the blood that unites us all. Man did not weave the web of life, he is merely a strand in it. Whatever he does to the web, he does to himself." Very different view of the world and our place in it; not man-centric. One of the reasons we start this course with Locke is this is where the individualism comes from. It's this workmanship ideal. Doesn't look individualist at all in Locke's formulation because at the end of the day, it's a theological argument; it's an argument about God having maker's rights over his whole creation. But it's the move of saying we are miniature gods that can behave in a god-like fashion, and then secularizing it, which leads to the individualism. Or consider something that Robert Nozick, a writer we're going to read later in the class, pretty soon, actually, starting next week-- he makes fun of the labor theory of value. He says, Why does mixing one's labor with something make one the owner of it? Perhaps because one owns one's labor (self-ownership, if you like), and so one comes to own a previously unowned thing that becomes permeated with what one owns. Ownership seeps over into the rest. But why isn't mixing what I own with what I don't own a way of losing what I own rather than a way of gaining what I don't? If I own a can of tomato juice and spill it into the sea, so that its molecules (made radioactive, so I can check this) mingle evenly throughout the sea, do I thereby come to own the sea, or have I foolishly dissipated my tomato juice? 
Why is it that we want to say when I put effort and energy into something that it's mine? Very, very perplexing and tricky thing, and we will come back to it in considerably more detail later in the course when we read John Rawls, because interestingly it's not until we get to John Rawls that we find anyone who's really willing to radically question workmanship and the self-ownership postulate that goes with it. Okay, so Marxism seems problematic, so what's left? And I think there are really two main things left. One is a negative argument that comes out of Marxism, rather than any of its positive claims, and that is the fact that his theory of exploitation fails doesn't mean that he lacks a good critique of markets as distributors of either goods or harms in society. We could go back to our story about Trump and the bag lady, for example. The Pareto superior result in that example, if you remember, was for the bag lady to die because there was no Pareto superior exchange that could occur between Trump and the bag lady. Markets reflect the inequalities that come into them, but they don't give any account of the justice of those inequalities. So there's this problem, if you like, of the tyranny of the status quo, and we saw that in the economic realm with the analysis of that problem, and in the political realm when we talked about the corn dealer problem with Mill, right? That the market systems are biased towards the status quo, but market principles are purely procedural principles. They tell you nothing about how you got the status quo, and so there's a kind of garbage-in/garbage-out problem with market systems. Nozick, we will see, makes this point explicit when he says any remotely plausible theory of justice is going to have to have three parts, a theory of justice in acquisition (i.e. a theory of starting points), a theory of justice in transfer (i.e. a theory of exchanges), and a theory of the rectification of past injustice (i.e. 
a theory for addressing accumulated injustices of the past). And Nozick, for reasons which we will go into, thinks he has accounts of all of those things. But the negative argument of Marxism is good: markets are not vindicated merely by reference to the failure of Marx's own alternative theory. If you're going to have a justification for the distribution that occurs through markets it's going to have to be something else. It's going to have to be provided. Maybe it can be provided, but so far in the utilitarian and neoclassical traditions it hasn't been provided. So there's a kind of unfinished agenda there that's put on the table by the Marxian critique of markets even though Marx's answer to that problem is unconvincing. And then I think secondly what Marxism leaves us to address is not an argument about the sources of value. The labor theory of value is just a hopeless analytical mess, and can't be resuscitated. There's no way to fix the labor theory of value. But there is an argument about freedom. Remember where we started; I said unlike the conventional wisdom that Marx is an egalitarian, no. He's a theorist of freedom. His utopian ideal is a world in which the free development of each is the condition for the free development of all, where our labor is not alienated. We saw that the utopian version of that is unsustainable. Nonetheless, Marx's definition of class, when you think about it, is really an argument about freedom. When he says you're working class if you have to sell your labor power to somebody else in order to live, it's the compulsion, right? It's the compulsion. It's not whether you choose to. I like lecturing here having a captive audience of people who have to listen to me ramble on endlessly. It makes me feel good, but that's not what makes me working class, right? It's that I have to work for somebody else in order to survive. It's that element of compulsion. That's your lack of freedom. 
So as Roemer puts it in the essay that I put on the syllabus, it's that there's a class monopoly of the means of production which creates that power; that some people have the power to insist that other people work for them. So it's really not about the calculus of contributions and who puts what, and whether the rate of exploitation's two point three-three, or one point six or anything. It's not about that. It's really about the distribution of power in the production process. And if we're going to take anything from Marx that is of enduring value it's going to be an argument about power. It's going to be an argument which says that a world in which we organize things so that one class effectively has power to control a different class's behavior, that is an unjust world. And much of the neo-Marxist literature that people take seriously jettisons the labor theory of value and explores this power-based argument as the root of what it was Marx was trying to get at with the concept of exploitation. See you on Monday.
The Moral Foundations of Politics with Ian Shapiro
Lecture 22: Democracy and Majority Rule I
Prof: Where we ended last Wednesday was the tail end of our discussion of the anti-Enlightenment. And I think some of you discussed in section the difficulties that arise when you think about the complete rejection of the Enlightenment project. For instance, I think a very dramatic example that fixes our intuitions very quickly is the example that I know some of you discussed, which is that in the 1950s in the United States there was no such thing as marital rape. You can remember this from our discussion of Mill. The wife was the chattel of the husband and her legal identity was suspended for the course of the marriage. Not only could a husband not be prosecuted for raping his wife, he could not be prosecuted for assault. He could not be prosecuted for doing all kinds of things to her that if he did them to some unrelated person on the street would land him in jail or worse. Now if you think about MacIntyre's discussion of practices, in the 1950s this was an accepted norm of the prevailing practice, and the argument that this should be seen as unjust or unacceptable wouldn't get any purchase from an ethic that was based on the idea that we must accept inherited traditions, and norms, and practices. And so the one important takeaway point from that is, however difficult the Enlightenment ideas of individual rights and trying to make objective statements about what goes on within systems of human association are, however difficult that Enlightenment project is, giving up on it completely presents even greater problems, because very few people are really going to want to go all the way with the idea that the individual should be subordinated to community norms and practices, and with the notion that we can appeal only to tradition in thinking about whether or not traditions as we've inherited them are acceptable. 
And so, enter democracy, the last section of the course in which we're going to talk about a tradition which I think does a better job than any that we have considered hitherto in delivering on the promise of the mature Enlightenment, the promise to recognize individual rights as the most important normative ideal, and to base politics on some commitment to objective knowledge about human society that goes beyond the beliefs, commitments and practices of whoever happens to be around and whatever values they happen to have. Now one thing about democracy that distinguishes it from all of the traditions we've considered thus far is that it's a tradition that was made famous by its critics. If you think about the ones we've considered thus far; the social contract was made famous by Hobbes and Locke. Utilitarianism was made famous by Bentham. Marxism obviously made famous by Marx. These were all ways of looking at the world that had their champions, and it was the champions that made the case for why we should behave in accordance with their dictates. And with the anti-Enlightenment as well, it was Burke as the big champion of the anti-Enlightenment. And then we looked at some modern anti-Enlightenment thinkers. Democracy was made famous by its critics. Who do you think this is? Any guesses? Who's that on the left, that gent on the left? Nobody know? Nobody want to guess? Who's that? Student: Aristotle? Prof: Close. Who? Student: Plato. Prof: Okay, who's the gent on the right? Anyone? First one to get it gets a free book. Yeah? Student: Tocqueville? Prof: You got it. Come and see me later you'll get your free book. These are both people who were not champions of democracy. They were worried about the potential that democracy has for tyranny. Let's listen to Plato. I don't know if you can read that, but I'll read it to you. And don't try and write it down. I will put it up on the server. 
Plato has two very famous analogies in The Republic which sum up his appalling disdain for democracy. He says, "Imagine then a fleet or a ship in which there is a captain who is taller and stronger than any of the crew, but he is a little deaf and has a similar infirmity in sight, and his knowledge of navigation is not much better (kind of dopey old captain). The sailors are quarrelling with one another about the steering-- everyone is of opinion that he has a right to steer, though he has never learned the art of navigation and cannot tell who taught him or when he learned, and will further assert that it cannot be taught, and they are ready to cut in pieces anyone who says the contrary. They throng about the captain, begging and praying him to commit the helm to them; and if at any time they do not prevail, but others are preferred to them, they kill the others or throw them overboard, and having first chained up the noble captain's senses with drink or some narcotic drug, they mutiny and take possession of the ship and make free with the stores; thus, eating and drinking, they proceed on their voyage in such manner as might be expected of them. Him who is their partisan and cleverly aids them in their plot for getting the ship out of the captain's hands into their own whether by force or persuasion, they compliment with the name of sailor, pilot, able seaman, and abuse the other sort of man, whom they call a good-for-nothing; but that the true pilot must pay attention to the year and seasons and sky and stars and winds, and whatever else belongs to his art, if he intends to be really qualified for the command of a ship." "Now in vessels which are in a state of mutiny and by sailors who are mutineers, how will the true pilot be regarded? Will he not be called by them a prater, a star-gazer, a good-for-nothing?" 
And then Plato's other famous analogy, which looks at the masses rather than how they would manipulate the government, he says, "Suppose a man was in charge of a large and powerful animal, and made a study of its moods and wants; he would learn when to approach and handle it, when and why it was especially savage or gentle, what the different noises it made meant, and what tone of voice to use to soothe or annoy it. All this he might learn by long experience and familiarity, and then call it a science, and reduce it to a system and set up to teach it. But he would not really know which of the creature's tastes and desires was admirable or shameful, good or bad, right or wrong; he would simply use the terms on the basis of its reactions, calling what pleased it good and what annoyed it bad." So Plato didn't have much regard for democracy. Not surprisingly, because in 399 BC the democracy in Athens had executed his hero and teacher Socrates precisely for pointing out the sorts of lack of knowledge that he is alluding to here, both in the ship's captain analogy and in playing to the mob sentiments that he's alluding to with this analogy of the people in a democratic system as being a powerful animal. And so Plato was very unimpressed with democracy as a potential system of rule. He thought it would quickly collapse into tyranny. The phrase "the tyranny of the majority," though, was made popular by Alexis de Tocqueville. You've already run into this when we talked about Mill's harm principle. Tocqueville was a nineteenth-century French aristocrat who went to America to try and understand how American democracy worked because it seemed to him less destructive of freedom and individual liberties than what was coming down the pike in Europe. 
He thought the French aristocracy and the French monarchy was way too shortsighted in not seeing that they had to understand the egalitarian trends of modern history and to create institutions that could lasso them, and control them, and domesticate them. So people think of Tocqueville often as a great defender of democracy, and in a certain sense he was, but he was also a critic of democracy, and I'll go into that in a little bit more detail shortly, but his main reason for thinking that American democracy was a relatively good system was its propensity to limit egalitarian impulses that he thought were breaking out all over Europe. And so Tocqueville's summation of his fear of the tyranny of the majority comes in Democracy in America. He says, "When I see that the right and the means of absolute command are conferred on a people or upon a king, upon an aristocracy or a democracy, a monarchy or a republic, I recognize the germ of tyranny, and I journey onward to a land of more hopeful institutions. "In my opinion the main evil of the present democratic institutions of the United States does not arise, as is often asserted in Europe, from their weakness, but from their overpowering strength; and I am not so much alarmed at the excessive liberty which reigns in that country as at the very inadequate securities which exist against tyranny. "When an individual or a party is wronged in the United States, to whom can he apply for redress? If to public opinion, public opinion constitutes the majority; if to the legislature, it represents the majority, and implicitly obeys its injunctions; if to the executive power, it is appointed by the majority, and remains a passive tool in its hands; the public troops consist of the majority under arms; the jury is the majority invested with the right of hearing judicial cases; and in certain States even the judges are elected by the majority. 
However iniquitous or absurd the evil of which you complain may be, you must submit to it as well as you can." So democracy brings with it this problem of the tyranny of the majority, and it was, after all, we saw earlier in this course, against that that John Stuart Mill erected his harm principle. He too had this great fear of the tyranny of the majority. And so, as I said, democracy was made famous first by its critics, and they point out that it has this propensity to pander to mass opinion without regard to whether it's true or false, and to ride roughshod over individual rights, again, without regard to whether this produces domination or worse. So that being the case, it seems, prima facie, not very encouraging to think that democracy can deliver on the Enlightenment where we're talking about politics that's based on science and politics that respects individual rights. And so, given reading Mill as you have, given the little bit of Plato and Tocqueville that I've just shown to you, and your own thinking about politics as you confront it day to day, you might well be skeptical of the proposition that democracy is going to deliver very well on the mature Enlightenment. And so just how that might be the case is what's going to concern us for this and then the next three lectures. And we're going to start our consideration of democracy with The Federalist Papers. The Federalist Papers are probably, along with the work of Rawls, one of the two most important pieces of political theory ever to come out of America. They were a series of articles published in the newspapers of New York State, as I'm sure most of you know, in order to try and help get the Constitution ratified. 
Although authored by these three gentlemen, James Madison, Alexander Hamilton and John Jay, they were signed with the pseudonym Publius, which was short for Publius Valerius Publicola, who was reputed to be the consul who had established the Roman Republic in 509 BC; whether he in fact did is a matter that's debated by historians. By the time they wrote the vast majority of these papers, these letters to the people of New York, the Constitution had already been approved by a majority of the thirteen states, but few people believed that it would survive if it wasn't adopted in New York. The Articles of Confederation had required unanimity, but in the actual Constitution itself they had said, "Well, nine out of the thirteen states will be enough," but nobody really thought it could survive if it was not adopted in New York. And so the Federalists took upon themselves the task, Hamilton, Madison and Jay, of persuading the people of New York to support the adoption of The Constitution and indeed they were successful in that endeavor. But they had the fear of this problem of majority tyranny that I've already alluded to with respect to Plato, and Mill, and Tocqueville, front and center in their considerations. And you shouldn't be surprised that they would worry about that, because if you imagine yourself back into the eighteenth century thinking about democracy, the ancient ideal of democracy, the ancient Athenian ideal, had basically been a notion of ruling and being ruled in turn. So if you have an academic department, let's say. Let's set aside the issue of untenured faculty. Just imagine a department of tenured faculty. We have a circulating chair. Somebody's chair for three years, then somebody else, then somebody else, that's the notion of ruling and being ruled in turn, and the reason you can do that is that everybody basically has the same interest. 
You don't have to worry about monitoring or controlling the current ruler, because basically they have the same interest as you do, and they're not, therefore, going to do anything with the collective that you wouldn't do yourself. So that's the ancient ideal of democracy as ruling and being ruled in turn. In the case of Ancient Greece it obviously excluded women, it excluded slaves, so it was a certain truncated conception of a democratic community about which people have discoursed at great length. But the model was the idea that you could have ruling and being ruled in turn without loss, because everyone was assumed, basically, to have the same interest. Now as soon as you get a diversity of interests that mechanism of government goes off the table as a way of doing things because then you might have subgroups who see that they-- let's suppose you introduce non-tenured faculty. They're going to have a very different interest than the tenured faculty. And so the idea of ruling and being ruled in turn wouldn't work anymore. Once you have a serious division of interest within the polity you have this problem. And this is the problem Madison articulates in Federalist No. 10 when he says, "By a faction, I understand a number of citizens, whether amounting to a majority or a minority of the whole, who are united and actuated by some common impulse of passion, or of interest, adverse to the rights of other citizens, or to the permanent and aggregate interests of the community." So once you have factions you have this problem. If you're the majority faction it doesn't matter to you, but it does if you're the minority faction because things are going to happen that you don't like. So the first thing Madison says to himself is, "Well, could we get rid of factions?" and if you read Federalist No. 
10 carefully you'll see that he thinks that the costs of doing that would be so great in terms of lost human freedom and the kind of repression you would have to engage in that it would look like the French Revolution or the Russian Revolution, neither of which, of course, he knew about when he wrote this, but neither of which would have surprised him. So you can't get rid of factions. "The causes of factions can't be removed, and therefore you have to look at their effects." He says, "The latent causes of faction are thus sown in the nature of man; and we see them everywhere brought into different degrees of activity, according to the different circumstances of civil society. So strong is this propensity of mankind to fall into mutual animosities, that where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts. But the most common and durable source of factions has been the various and unequal distribution of property. Those who hold and those who are without property have ever formed distinct interests in society. Those who are creditors, and those who are debtors, fall under a like discrimination. A landed interest, a manufacturing interest, a mercantile interest, a moneyed interest, with many lesser interests, grow up of necessity in civilized nations, and divide them into different classes, actuated by different sentiments and views. The regulation of these various and interfering interests forms the principal task of modern legislation, and involves the spirit of party and faction in the necessary and ordinary operations of the government." Managing factions so that they don't destroy the common interest is the basic business of politics. Now how do you do that? Well, how do you manage the effects of faction? 
He goes on and he says, " The smaller the society, the fewer probably will be the distinct parties and interests composing it; the fewer the distinct parties and interests, the more frequently will a majority be found of the same party; and the smaller the number of individuals composing a majority, and the smaller the compass within which they are placed, the more easily will they concert and execute their plans of oppression. Extend the sphere, and you take in a greater variety of parties and interests; you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens; or if such a common motive exists, it will be more difficult for all who feel it to discover their own strength, and to act in unison with each other. Besides other impediments, it may be remarked that, where there is a consciousness of unjust or dishonorable purposes, communication is always checked by distrust in proportion to the number whose concurrence is necessary." "Hence, it clearly appears, that the same advantage which a republic has over a democracy, in controlling the effects of faction, is enjoyed by a large over a small republic-- is enjoyed by the Union over the States composing it. " I will talk in a little while about what advantages Madison thinks a republic has over a democracy, but right now I want to focus on this other point which is that a large republic has an advantage over a small republic. A society that has no factions at all presents no problems for democratic theory as we have already talked about in connection with the stylized idea of ancient Greece as one in which everybody has the same interests, and you can have ruling and being ruled in turn. What Madison is most afraid of is a single faction, a majority faction, because if you have a society in which there's a majority that agrees all the time or most of the time, the problem is what about these people? They're going to be on the losing end. 
So if you have a society in which race, and class, and religion all tend to fall along the same division, the people on the losing side are either going to have to knuckle down to tyranny or they're going to reach for their guns. They'll overthrow the system if they can; and if they cannot, they might as well become criminals. They're not going to have an incentive to participate in the democratic order. And so from Madison's point of view a society with no factions at all would be the ideal. Think of modern Poland, ninety-nine percent Catholic or something like that, but even there, there are other divisions. So even there it doesn't work. But if you're going to have factions the worst thing to have is one or a small number. What you really want is lots of factions, crosscutting cleavages as we talk about them in modern political science. The notion here is if you're in a majority on one question, but you know you might be in a different majority or in the minority on the next question, then you have both a reason to temper your own behavior when you're in the majority and not tyrannize over the minority, and if you're in the minority you have reasons to accept your loss this time around. As people always say, when their team loses the World Series, "There's always next year, there's always next time." So next time you might be part of some different coalition and you might be able to prevail. So the best we could have would be a world with no factions at all, but that's unrealistic in modern times, particularly once you have an economic division of labor, class divisions, as Tocqueville would point out forty years later. And so then what you need is lots of factions, and that's why Madison argued you need a big republic because the larger the republic, the more likely it is that you're going to have multiple crosscutting cleavages, and then the system--we sometimes define democracy as institutionalized uncertainty of outcomes. You don't know. You don't know what the majority's going to decide. 
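The crosscutting-cleavages idea lends itself to a quick toy simulation. Everything below is an illustrative assumption rather than anything in Madison: a hypothetical electorate of 101 voters takes independent coin-flip positions on 8 separate issues, and we count how often each voter ends up on the winning side. With several independent cleavages, almost nobody wins every time and almost nobody loses every time, which is the "institutionalized uncertainty of outcomes" point.

```python
import random

random.seed(0)

N_VOTERS = 101   # odd, so every vote has a strict majority
N_ISSUES = 8     # independent issue dimensions (crosscutting cleavages)

# Each hypothetical voter takes an independent yes/no position on each issue.
voters = [[random.random() < 0.5 for _ in range(N_ISSUES)]
          for _ in range(N_VOTERS)]

# For each issue, find which side wins and credit the winning coalition.
times_in_majority = [0] * N_VOTERS
for issue in range(N_ISSUES):
    yes_votes = sum(v[issue] for v in voters)
    winning_side = yes_votes > N_VOTERS // 2   # True if "yes" carries the vote
    for i, v in enumerate(voters):
        if v[issue] == winning_side:
            times_in_majority[i] += 1

always_win = sum(1 for t in times_in_majority if t == N_ISSUES)
always_lose = sum(1 for t in times_in_majority if t == 0)
print("always in the majority:", always_win)
print("never in the majority: ", always_lose)
```

By contrast, with a single cleavage (N_ISSUES = 1) the same electorate splits into one permanent majority and one permanent minority, which is the single-faction society Madison fears.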
We don't know if--let's suppose President Obama nominates Judge Wood for the Court. We don't know what the outcome will be so everybody will participate and lobby and so on to try and affect the result. People will talk about coalitions. A more dramatic case, I think, was when George Bush Senior nominated Clarence Thomas to the Supreme Court. It split the African American community right down the middle because there were people who were largely Democrats who also wanted an African American justice on the court, a good example of crosscutting bases of political affiliation. And so the Madisonian idea is that that's actually a good thing. This institutionalized uncertainty of outcomes prevents the tyrannizing of some groups over others and gives everybody an incentive to remain committed to the process. And that is what would subsequently become called the pluralist theory of democracy, associated most famously in the twentieth century with Robert Dahl, who was on the Yale faculty for decades and decades. And he's still--he's about to turn, actually, 95 years old this year and he lives in New Haven. He's probably the most famous democratic theorist of the second half of the twentieth century. The pluralist theory of democracy builds on this Madisonian idea that what you want is institutionalized uncertainty of outcomes that is a byproduct of crosscutting cleavages, multiple crosscutting cleavages. So that present winners have reason to limit the extent to which they tyrannize over losers, and present losers have incentives to remain committed to the process for the future. There's always next year. You want people to believe that in order to keep them committed. But that wasn't enough for Madison. That was the basis of his argument for an extended union, but it wasn't enough for Madison. He was worried, just as Tocqueville articulated it in the nineteenth century, that the institutional structure could facilitate tyranny. 
He says, "But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition." Maybe the most famous line Madison ever wrote. Ambition must be made to counteract ambition. "The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government." Another famous line: "If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions." So your basic guarantor is the crosscutting cleavages among the people in an extended republic, but that's not enough because-- and this is really a view most famously associated with the nineteenth-century liberal British thinker Lord Acton when he said, "Power tends to corrupt, and absolute power corrupts absolutely." The notion is that you really have to divide up power. You have to have an institutional scheme-- these are his auxiliary precautions--an institutional scheme which causes people in the different branches of government to check one another. And so we get what Madison called a non-tyrannical republic rather than a democracy. 
And in that sense Madison is a critic of democracy just like Tocqueville, and just like Plato. They want to say that pure democracy leads to tyranny and we have to check it with an institutional scheme that limits what can be done. And so the American system of so-called checks and balances, which you all learned about in high school civics courses, structures the constitutional order that they created. This is something that, as you can tell from the passage from Tocqueville that I read you, Tocqueville didn't fully understand or appreciate, because he thought that every branch of the government was ultimately controlled by the majority. And the designers of The Constitution intended, rather, to have them at least controlled in different ways by majority opinion, but more importantly to set up a scheme in which they would control one another. Ambition would be made to counteract ambition. And so we've all been reading in the newspapers this weekend at the moment of the retirement of Justice John Paul Stevens that he was one of the people, for instance, who checked the expansion of executive power after 9/11 in the Hamdan case by ruling that it was unlawful to use military tribunals to try the Guantanamo Bay detainees. From the point of view of Madison the specific issue is not what's important here, but rather that what we saw after 9/11 was this big expansion of executive power, and then you see an assertion of power by a different branch saying to the executive, "No, you can't do that," and so a limiting of executive power in this case by the judicial branch. And so what you have in the American scheme, the American constitutional scheme, is a large number of veto points. First of all we have The Bill of Rights. The Bill of Rights was enacted after the Constitution, but only after undertakings had been made in most of the state legislatures that it would be enacted. 
So it was basically, even though it came afterwards, there was an informal agreement by the time The Constitution was adopted that there would be a Bill of Rights, what became the first ten amendments, that would limit the power of the central government. And so we have a Bill of Rights. Then we have supermajority requirements, most obviously to change the Constitution: huge supermajority requirements. Amending the Constitution is very difficult. You need two-thirds in both Houses of Congress and then three-quarters of the states. Some of your parents were around at the last serious effort to amend the Constitution, namely the ERA, the Equal Rights Amendment, which failed to pass that threshold; it's very hard. But then we have other supermajority requirements, the filibuster rule in the Senate which we may see come into play in the Supreme Court confirmation hearings. That's not in the Constitution, but it's a supermajority requirement, nonetheless, that we've embedded in our institutional scheme. Anytime you add supermajority requirements you make it harder for the current majority to work its will. Separation of powers, already alluded to: the Court versus the executive, the legislature versus the executive. The notion that ambition of players in one branch will counteract the ambition of players in a different branch. Greatly debated subject, how effective can the branches really be to check one another? After all, the Court does not have an army at its disposal. You guys are old enough to remember the 2000 election when we had a knife-edge result and partisans on both sides were saying their candidate won, and it was being litigated in various ways through the state courts in Florida and we didn't have a clear result. And finally the Supreme Court in Bush v. Gore ruled in a very controversial decision that President Bush was the winner, or then-candidate Bush was the winner and Vice President Gore was the loser. 
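As a back-of-the-envelope check on how demanding those amendment thresholds are, the arithmetic can be written out. Note that the chamber sizes used here (435 House seats, 100 Senate seats, 50 states) are today's figures, set by statute and later history rather than by the Constitution itself:

```python
import math

# Today's figures (set by statute and history, not by Article V itself):
HOUSE_SEATS = 435
SENATE_SEATS = 100
STATES = 50

# Article V: proposing an amendment takes two-thirds of both houses,
# and ratification takes three-quarters of the states.
house_needed = math.ceil(2 * HOUSE_SEATS / 3)    # 290 representatives
senate_needed = math.ceil(2 * SENATE_SEATS / 3)  # 67 senators
states_needed = math.ceil(3 * STATES / 4)        # 38 states

print(house_needed, senate_needed, states_needed)  # 290 67 38
```

So 13 states, which may contain only a small fraction of the national population, suffice to veto any amendment: a vivid measure of how far the scheme departs from simple majority rule.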
And Gore stood up on national television and he said, "I accept the result. I don't agree with the result, but I accept the result." He didn't have to. You could imagine in many countries at that point, the Clinton Administration would have sent the tanks down Pennsylvania Avenue. What would the Court have been able to do? Nothing. It's far from obvious in Iraq that the loser of the election is going to accept the result. So there's great debate. Madison makes heavy weather in The Federalist Papers of the proposition that just writing something on a parchment doesn't guarantee that people are going to accept what you write. So what does it really mean to say that there's separation of powers, when the Court, ultimately, is dependent upon people just going along with what it says? And likewise with the legislature, yes, there's this attempt to separate--Congress votes. In theory only Congress can declare war, and Congress has to fund the military, but in practice it's very difficult for Congress to resist what the executive branch wants to do on all of those things. And when we look at cases where the Court faces down either the legislature or the executive branch it's usually only in cases where what they're doing is very popular. When they're telling Nixon to turn over the tapes, 1973-'74, it's a unanimous court and it's a popular action. If Nixon had been at 95% in opinion polls at that time it's less clear that the Court would have faced down the executive branch, or at least so some scholars claim. So how much you really get separation of powers is a subject that's greatly debated by political scientists, because at the end of the day, despite what Madison says, separation of powers is just something written on parchment. As Dahl puts it in his critique of Madison in A Preface to Democratic Theory, his most important book, which was published in 1956 and is still in print. 
Very few of us can say we wrote a book fifty years ago that's still in print, but there it is. It is still in print. Dahl says the problem with this famous one-liner of Madison's is that there's actually no real mechanism to ensure that ambition will counteract ambition, and it's rather simply that people accept this scheme. But what if they didn't? What if in 1800 there had not been a turnover of power? We had a knife-edge election in 1800. Maybe that would have made America much more prone to the seizure of power by those who currently controlled the levers of power, the military and so on. The loser gave up in 1800 and we began to create this culture of democratic turnovers, where the government loses an election and gives up power. Very unusual thing, but it happens. So separation of powers, much debated, and I'm going to come back to this, much debated as to how effective and important it really is. More veto points. We have bicameralism. As you know from reading The Federalist Papers, the founders were mostly worried about the power of Congress because it seemed like it was going to be the most powerful branch. The executive was weak and designed to be weak, much weaker than it is today, and so they thought the thing to do is to divide Congress, have a bicameral system, and legislation has to be passed in both houses and have them elected by different rules, so we have Senate seats elected at large in the states, whereas we have small congressional districts and therefore we have very different incentives that the politicians are going to react to, very different factions overlapping with one another in different ways to get back to the basic pluralist ideal. And then finally, of course, last but by no means least in terms of veto points, federalism. Federalism is another source of veto points in what can actually happen. Think in the last few years of the debates about gay marriage. 
We have marital law as state law, and for a long time there has been a federal law which says if you get married in one state the marriage will be recognized in every other state. So if people move and then later on they get divorced or one dies it doesn't matter what state you were in for the purposes of the divorce or for probate law. You're still going to be governed by--it's just this sort of utilitarian efficiency thing. Every state can have differences in its marital law, but California's got to recognize Connecticut's marriages as valid. Then along comes a socially divisive issue like gay marriage and unsurprisingly some states want to enact gay marriage and some don't, but then this federal requirement that all states recognize one another's marriages suddenly becomes ideologically charged. Whereas it was presented as just a utilitarian efficient thing, now it becomes ideologically charged because if the state of Massachusetts recognizes gay marriage then it implicitly means that the state of Georgia has to recognize it as well. So you get an upsurge of political activism, and finally the Defense of Marriage Act in Congress to preserve the rights of the states to reject or accept gay marriage. And so that's an example of the veto power of states in play that rears its head in a setting like that. And so what you have in the US is not a democracy, certainly not a pure democracy. You rather have--what they designed was what they referred to as a non-tyrannical republic. They thought that that was the best they could do. They had misgivings about a lot of it, but they thought that this was the only way they could create the union and head off civil war. Of course they didn't head off civil war. We had civil war anyway, a subject I'll come back to on Wednesday. But we created this hybrid between a non-tyrannical republic, if you like, and a democratic system. 
And many of our arguments about contemporary democracy are really arguments about how much we should preserve this hybrid and how much we should have a thoroughgoing democratic system. And I'll take that up on Wednesday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
15_Compensation_versus_Redistribution.txt
Prof: So today we're going to finish up talking about Nozick, and that will give you time to spend your spring break reading Rawls, which is, I know, what you will all do. But I want to come back to something I mentioned briefly about Nozick, and just reiterate that he thinks that in order to have a convincing account of justice it really has to have three components. One, what he calls justice in acquisition. Two, what he calls justice in transfer. And three, what he refers to as the rectification of past injustices. You need accounts of all of those things. How people got to have what they have, what should govern transactions among them, and then what should you do about past injustices. And actually in what we've done already you've got the main components of his account in front of you, or in your notes from last week, because his theory of justice in acquisition is a theory that's intended to respond to the left critique of markets that I'm going to talk about shortly. His theory of justice in transfer is the Pareto system, and his theory of the rectification of past injustices is the discussion of compensation that we had last time. So let's think first about this idea of justice in acquisition. What he has in mind here is the left critique of markets. What he has in mind is the garbage-in/garbage-out phenomenon: the Pareto principle is a purely procedural principle that takes for granted some status quo. We worked through this with the bag lady, and Trump and all of that when we did the Pareto system. And he has actually an ingenious answer to this critique, and he uses as his way into it the Wilt Chamberlain example. Anybody remember what Nozick says about Wilt Chamberlain? Who was Wilt Chamberlain? Yeah? Student: Basketball player. Prof: He was the best basketball player in the world until Michael Jordan came along, right? Student: Until Larry Bird came along. Prof: Until Larry Bird came along. 
Well, fortunately Nozick didn't have to adjudicate that one. And so Nozick said, "Chamberlain is making a contract with the owner of a team just like professional players do, but this contract has a side agreement to it." Anyone remember what the side agreement was? What was the side agreement, anybody? Yeah, over there? Student: Wasn't it that Wilt Chamberlain gets a quarter of every dollar that the team gets in ticket sales? Prof: Exactly right. So on nights when Wilt Chamberlain is playing in a home game, the ticket price goes up by a quarter, and that quarter literally gets dropped into a box and gets given to Wilt. And apart from that, his agreement is like any other contract. So obviously his agreement with the team owner is a typical player's agreement, and it's a Pareto superior transaction, and there's no question about that. But the reason Wilt Chamberlain gets this quarter is because he's a charismatic figure and everybody wants to see him. What is Nozick doing here? What is the point of this? What's he trying to show us? Anyone want to--it's not immediately obvious, but it's quite brilliant. So here's the point. The point is, over time Wilt is going to get a lot of money from these quarters because he's the best basketball player in the world and people want to see him. So he's going to get richer, and richer, and richer. And so maybe, let's say, after five seasons of this, he has millions of dollars in these quarters that have accumulated. These people have been putting this quarter in the little box that goes to Wilt, millions. Why is this a good example for Nozick? Yeah? Give her--yeah, over there, yeah. Student: Well, perhaps he's showing that even though this isn't an egalitarian distribution of wealth, it's mutually agreed-upon by all parties involved so it's okay. 
Prof: So it's a Pareto superior transaction: every time you put that quarter in that slot, you make a voluntary choice, and he makes a voluntary choice to come out and play and maybe smile at you or something, and both your utilities go up. So it's in that sense just like any other Pareto superior transaction, so why the whole rigmarole? What else follows from this example? Think back. Think back to what is his real target here. His real target is the left critique of markets which says, "If you start out with unjust initial conditions, if your system of justice in acquisition isn't met, then it doesn't matter whether your transfers are voluntary." So what's he doing here? What's the point of this example? Think. Anyone want to have a go at it? Yeah? Student: That what we'd consider an unjust distribution of goods or wealth may actually result from what was originally a just distribution. Prof: Correct. His bumper sticker is, "Liberty upsets patterns." Liberty upsets patterns, this is his point. He's saying, "Everybody gets so excited about the unjust initial conditions problem. I'll tell you what. I don't know what just initial conditions are. I, Robert Nozick, have no idea. You pick. You're an egalitarian, fine, start with pure equality. You believe in meritocracy, fine, start with a meritocratic distribution. You pick. I really don't care." So let's, for the sake of argument, just say you're an egalitarian. So you pick equality, strict equality. Everybody starts off with the same amount. Now we're going to allow voluntary transactions. You know what's going to happen? You're going to get inequality because some people are going to put a quarter in the slot to see Wilt Chamberlain. So this stuff about the big problem with markets, the problem of starting points, the problem of initial conditions, it's just a red herring. It's a sideshow. It's bunk. 
Because, in fact, even if you pick-- so you're the egalitarian, just for the sake of argument, and you pick the starting point that you are convinced is just, and we allow markets to run, it's going to be undone just by the voluntary choices of individuals. "So, I, Robert Nozick, say to you, we'll have a little syllogism. If we have just initial conditions and voluntary transfers then the outcome must be considered just." That's Nozick's little syllogism. If you have just initial conditions and voluntary transfers, then the outcome must be just. It's just a shell game to get all hung up on the initial conditions, however, because you can pick them and we're still going to undo what you like. So you, the egalitarian, pick them. People go to basketball games, and what do you know? Five years later you have a lot of inequality, and then the only way you can fix it is coercion. The only way you can fix it is taking some of that away from Chamberlain in the form of taxes and giving it back to those people in the form of some sort of transfer payment, and that violates his freedom, uses him to benefit them, nothing else. So liberty upsets patterns. When he talks about a patterned conception of justice it's one where you specify some distribution, whether it's strict equality, whether it's Bentham's practical equality, or some other distributive system; if you specify in advance what the outcome is, he's saying, that's a patterned conception, and the problem with it is you could only maintain it over time with coercion. Whereas, if you value freedom of the individual, liberty (remember, Nozick is another one of our Enlightenment theorists, so individual freedom, individual rights, is the highest good), then you have to accept whatever voluntary transactions generate starting from a system which you have conceded is just. So in a way he turns the tables on the left critics of markets by calling their bluff, and it's a brilliant argument. 
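The arithmetic behind "liberty upsets patterns" is easy to sketch. The attendance, schedule, and starting-holdings numbers below are invented for illustration (Nozick's own telling has a million fans each paying 25 cents over a season); all that matters is that everyone starts strictly equal and every transfer is a tiny voluntary one:

```python
N_FANS = 10_000          # hypothetical attendance at each home game
GAMES_PER_SEASON = 40    # hypothetical schedule
SEASONS = 5
QUARTER = 0.25

START = 1_000.0          # the agreed "just" starting point: strict equality

fans = [START] * N_FANS
wilt = START

for _ in range(SEASONS * GAMES_PER_SEASON):
    for i in range(N_FANS):
        fans[i] -= QUARTER   # each transfer is voluntary and trivially small...
        wilt += QUARTER      # ...but they all accumulate on one side

print(f"Wilt's holdings:     ${wilt:,.2f}")      # $501,000.00
print(f"each fan's holdings: ${fans[0]:,.2f}")   # $950.00
```

Five seasons of quarter-sized Pareto superior transactions turn strict equality into a half-million-dollar gap, and the only way to restore the original pattern is to take Wilt's quarters back by coercion.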
Now you could say, "Well, this is a philosopher's example. After all, we never start from a system that is universally accepted to be just." But I think he would make two points in response to that. One is, "I'm showing you that even if you did you would still have this problem that over time it would get undone by the voluntary transactions of individuals." And secondly, Nozick wrote this in 1974, but I think if he had lived--well, he did live long enough. He died in 2002, but if he had been writing this book after the early 1990s, he'd have had some very good real world examples of what he was talking about following the collapse of the Soviet Union, because in a lot of East European countries, such as Czechoslovakia and East Germany, they had these huge state enterprises that they had decided to privatize. And what did they do? Well, they basically privatized them on an egalitarian basis. That seemed fair. So there would be these giant factories that had been previously state owned and everybody would get their little chit. Everybody would get their share of that factory. It was very widespread in Czechoslovakia and Poland in particular, but also in some of the other ex-communist countries. And what happened? Well, some people didn't know what they were, probably rolled cigarettes with them, some people put them under the bed, some people threw them away, some people put them in shoeboxes, some people started buying them up, started going around and buying up pieces of this factory that had been given out as a result of the privatization. And what do you know? Five years later, ten years later, some of the people who had figured out this might be a good thing to do were millionaires, and other people basically didn't have anything. So it's not a completely fanciful example, right? And Nozick would say, "There you go. 
What was decided as a fair distribution of the state assets at the moment of privatization was accomplished, but then markets were allowed to run, and some people took risks, and some people were creative. Maybe some people spent all of their savings buying these things up so they internalized some risk, they might have turned out not to have any value, and they got the benefit. And so there you go, five years later you have inequality. Liberty upsets patterns. Now are we going to go and take the millions from the people who bought up those chits and give it back to the people who rolled them and smoked as cigarettes, or gave them away, or lost them, in order to recreate the pattern? Nozick says no, right? And this is what he means by comparing what he calls historical principles or procedural principles, on the one hand, with pattern principles or end-state principles on the other hand. An end-state principle specifies some result, some end-state, some condition, some teleological outcome, to use the jargon from last time, whereas a historical or a procedural principle specifies no pattern at all and says whatever is generated by market transactions is just. And this long two hundred-year-old critique of them based on the status quo, we now see from the Wilt Chamberlain example, is just smoke. What about that? It's a powerful argument. Anyone not like it? You all like it? What's wrong with it? What makes you uncomfortable? Can you give her the microphone? Can you right next--can you just give her--yeah, okay, thanks. Student: I'd say that the thing that's being distributed doesn't necessarily have to be money. If you change the thing being distributed to, I don't know, education, for example, that having equality there might be something that would affect outcomes more than starting from a concrete equality. Professor Ian Shapiro: And why should we care about that? Student: Well, I guess it's kind of a way of changing the process as opposed to... 
Professor Ian Shapiro: Okay, so that's fair enough, so that's one thing somebody might like about it. Anything else anyone doesn't like about it? Student: Well, Wilt Chamberlain is kind of like a self-made... Professor Ian Shapiro: You've got to talk a bit louder. Student: Wilt Chamberlain is kind of like a self-made person. He gets there because of his basketball talents, but it kind of ignores the problem of people who are born into incredible wealth or born into incredible opportunity. Professor Ian Shapiro: How does it ignore it? Student: Well, I feel like in the situation with Wilt Chamberlain he realizes that he deserves what he gets and you shouldn't take it away from him, but I don't know if you can argue that having been born into a family that has a lot of money, or being born somewhere where you can barely put food on the table, makes someone more deserving of one or the other. Professor Ian Shapiro: Okay, so, but that's what he thinks he's answering, right? Because he's saying, "Okay, so you're worried that some people are born with a lot of resources and some are born with nothing." He's saying, "Fix it however you want. So we'll start off by redistributing and we'll start off with pure equality. If you allow markets to run, it's still going to become unequal over time," right? That's his point. So what don't you like about that? Student: Well, I feel like there's a problem in the way that-- maybe it's that he believes that the markets are truly free in some way and that there are really free transactions, but I feel like there's not necessarily going to be, ever. Prof: But isn't that why he uses the Wilt Chamberlain example, and it's only a quarter? It's only a quarter for crying out loud, and you didn't have to give it. It's a trivial amount, but there it is five years later, he's the millionaire and you're still working in a factory. You're not comfortable with it? Student: No. Prof: But you're not sure why. Anybody else want to say what? 
It's a very clever example. It's an example of where the philosopher's example really cuts to something important. Yeah, what's wrong with it? Student: Well, it seems like Wilt Chamberlain is an exception. Prof: Wilt Chamberlain's an exception, why? Student: Because the poor will tend to stay poor and the rich will tend to stay rich, and just because there's occasional disruption of the pattern doesn't mean the pattern will be fluctuating all over the place. Prof: Okay, so he's unusual, but what about the-- okay, let's grant your point, but what about the example I just gave of Poland after 1989? Everybody starts out equal and it's just that some people see an opportunity here and they get rich, and other people don't and they lose out. Student: Well, Nozick is relegating the principles to the very beginning. He's saying you can only apply your principles at the beginning and he doesn't let you do that as a continuous thing. Prof: Well, I don't think he'd back down that quickly. He'd say, "Look, the point of this example is I'm letting you define the beginning." So after 1989, the Polish government defines the new beginning. Everybody gets an equal share of state assets. We're creating private property for the first time. Everybody's getting an equal share, but five years later we have inequality. Yes ma'am? Student: To use, in particular, the example with the shares of factories in the Soviet Union, I think that Nozick is perhaps making the false assumption that everyone is going to have equal ability to make use of that fair distribution at the beginning, when in fact people who are poor are not going to be able to buy up the other shares of the factories, and it's going to exacerbate inequality that was already there. 
Prof: Well, your point is well-taken, but notice I said Czechoslovakia and Poland because in the Soviet Union what tended to happen was bureaucrats from the old system used their special position in the bureaucracy and their intimate knowledge of what was going on to enrich themselves and grab a lot of these assets. So I didn't want to go with that example, but in the East European countries there was a lot less of that, and it was more of a sort of genuine new beginning. But you still, you look at a country like Poland and it rapidly became massively unequal. Student: Well, that makes sense, but I would assume that those small inequalities that existed in Poland at the beginning of that redistribution ended up creating those massive inequalities. Professor Ian Shapiro: That's right. It's just like one quarter at a time. Student: Right. Professor Ian Shapiro: Okay, but that's Nozick's point. You don't need some massive injustice to be done at the beginning in order to get huge inequalities over time. Let's modify the Wilt Chamberlain example a little bit. Well, before I do that, how many people think that the outcome in the Wilt Chamberlain example is just fine, so he gets rich, but people want him to get rich? So okay, how many think it's not fine? Okay, a few, so now let's play with it a little bit and see if your opinion changes. Let's just imagine it's a single company town, okay? It's a single company town. All these people who go to these games basically work in the only factory in town. And let's just imagine that after five or eight years of this when he's getting close to retirement, and he's not playing such good basketball anymore anyway, Wilt says to himself, "You know what I'm going to do? I'm going to buy the only factory in town," and he buys the factory in which they all work. And so he now owns the factory, Pareto superior transaction, voluntary transaction. He uses these millions he's accumulated from all of their quarters to buy the factory. 
But times are tough. There's globalization coming along. The factory's losing money, and his manager says to him, "You've got to massively cut your wages here." And so he gets into a confrontation with the workers, and they won't back down, and his company he thinks is not making as much money as it should, and so he says, "Well, unless you take a seventy percent pay cut, I'm moving this company to Mexico." So they are, in effect, forced to take a seventy percent pay cut as a result of this transaction. Now what about that? What would Nozick say? What would Nozick say about that? Nozick would be fine with it, wouldn't he? Yeah, that's the way the cookie crumbles, right? "Losses must lie where they fall" is a famous line from an American judge called Learned Hand. That's what happened. They chose to give him all of this money. Maybe they didn't anticipate what he was going to do with it, but that was then and this is now, and he's entitled to do whatever he wants with his money. How many people think that changes this thing? If we doctor Nozick's example in this way does anybody think it changes anything, somebody who previously thought it was okay but now is less sure? There's nobody? Yeah? Why? Let's get the mic there yeah, over here, yeah? Yeah. Student: I don't have a real explanation. I just think it's morally wrong because it's kind of coercing the people to go by his money and everything, while before he was just kind of earning money for his own happiness, and his own utility, I guess. In this case he's using his money and it seems to have bad moral implications, so that's why I'm slightly more uneasy with this outcome. Professor Ian Shapiro: So it's what he's doing with it. He's taking advantage of the situation in some way? Student: I guess so, yeah. Professor Ian Shapiro: Well, anyone want to disagree with that? Yeah? 
Student: Well, I think it changes the situation in a different way in that now we have coercion of a group of people to keep the pattern of Wilt Chamberlain being in charge intact. So even though it does seem at first glance like a mutually beneficial arrangement it seems as though people are being coerced into doing something in order to maintain a... Professor Ian Shapiro: But where's the coercion? Student: Well, they're being coerced into taking this pay cut even though it seems as though it's a really bad move for them, so... I mean, they have no choices, so I guess they've been backed into this corner, but... Prof: Well, they have the choice of not being employed by Wilt, right? Student: I guess so. I don't know. Prof: We have this essay question which some of you are probably working on. Joan Robinson's line, "The one thing worse than being exploited is not being exploited," but he's saying, "I'm choosing to close down my factory and move it to Mexico. I'm not coercing you. I just don't want to run a factory at a loss." Most people wouldn't allow the market to run indefinitely. So for instance, we have laws if you have a hurricane and there's no water in a part of the country that's been hit by the hurricane, and the local hardware store with bottled water suddenly ramps up the price a thousand percent. We call it price gouging. It's an evil thing. But he would say, "Look, the demand for water just went way up. I have water. Why should it be price gouging?" So that's the point of the outsourcing example, right? The point is, at some point most people are going to say, "No, we can't just allow the market. We have to pay attention to the context in which people are buying water right after a hurricane. Through no fault of their own, the price of water has skyrocketed, and so we're going to have laws to punish people who engage in price gouging." Who thinks that we should have laws against price gouging? Yeah? 
Now, if we should have laws against price gouging shouldn't we have laws against the sort of thing that Wilt Chamberlain, in my modification of Nozick's example, is proposing to do? Anyone think we should have laws against price gouging, but we shouldn't have laws limiting what--no? Is that right? What he's threatening to do is like price gouging, or anybody think it's not like price gouging? So what's the difference? Yeah? Student: No, I was thinking it's like Mill's harm principle a little bit, so maybe you could actually find for price gouging and for Wilt Chamberlain a reason to allow a government to say that you're not allowed to do it even though it seems perfectly fine as a market transaction. Prof: But what is the difference exactly? Student: I'm saying they're similar because they both involve harming people. Prof: They both involve harming people, but the price gouging is worse somehow than what Wilt is proposing to do? Student: I'm not saying that it's necessarily worse. It could be, but I don't have a definite reason to say so. Prof: So I think Nozick's point is going to be, "Well, anytime you're going to step in, anytime the government is going to step in and say this crosses some line where even though it's voluntary transactions, they're not voluntary in any meaningful sense because people are in such extreme circumstances." The price-gouging situation is an example, but you're still going to have to say, "Well, how do we know where to draw that line? When is it going to be the case that the position we're putting people in and saying 'choose' is so bad that the government steps in? Is it when they're about to die, or is it not quite that far? Where are we going to decide? What is the point at which we step in and start making interpersonal judgments of utility?" That's what we're really saying, right? We're saying, "At some point we're going to step in, the state's going to step in, but we don't have a very good account of at what point." 
Because if you say, "At no point is the state going to step in," then you're really back to saying in the example we talked about way back with the Edgeworth Box that the bag lady should die because she has nothing that Trump wants, and so that's the Pareto efficient result. But if you want to say, "Well, at some point before that point, the state should step in," you need some account of what it is. And given that things happen and circumstances change, just saying if the initial conditions were just and the transactions were voluntary the outcome must be just, isn't going to be satisfying in every instance because the status quo changes. There can be drift, if you like, utility drift, in the status quo. And different people will think the state should get involved at different points, but there are very few people, I think, who are going to go all the way and say, "Well, the bag lady should just die," or "All these people should lose their jobs." Maybe that'll be a more difficult case for some of you than for others, but it doesn't seem very credible to say you're not going to have any point at which we're going to say the state should step in. But now Nozick would want to say, "Well, even if I conceded that, we really want to keep it to a minimum because the problem is we don't agree. The problem about having any pattern or end-state conception of what's allowable in society is we don't agree. We have deep pluralism of values in this society. What some people think is right other people don't think is right, and why should the people who are in the majority get to impose their view on everybody else, or indeed, why should people who are in a minority get to impose their view on anybody else? Why should anybody get to impose their view on anybody else?" That's his claim, and we saw in the last lecture that he made a distinction between redistribution and compensation, remember? 
He said that we would only get the minimal state as a result of forcible inclusion of independents if they could, in principle, be compensated making it legitimate. We wouldn't get anything more because people don't agree. So some people would say it's terrible for people to lose their jobs. Others would say, "Oh well, they should have thought about that when they were putting all those quarters in the box, that they're giving him power." So if you cast your mind back to last week's lecture on Wednesday, we ended up saying, "Well, Nozick's argument isn't as powerful as it looks because we could do more in the name of compensation than he wants to do," right? We could allow the creation of unemployment insurance. Remember there's the fear of being unemployed in all of that. So he might agree about that but say, "It's still better to think in a compensatory idiom." For one thing, yes, you might get unemployment insurance that way, but think of all the things the welfare state does. It gives protection to the disabled, it gives civil rights guarantees. These are all sorts of things that wouldn't come about because not enough people would want them. The disabled are a minority. The African Americans before the Civil Rights Act were a minority, so they wouldn't be able to force everybody to accept what they wanted so it wouldn't get done as a result of the kind of mechanism that creates the minimal state. There'd be a lot of things government currently does that wouldn't happen. And it's better to have compensation rather than redistribution, because compensation doesn't require us to agree about a pattern. It doesn't require us to agree on what kind of outcome is actually just. The whole idea of compensation is making somebody whole. 
If one of you walks up the street and punches me in the face, and I go to court and sue you, the question is what damage did that angry student do to Ian Shapiro, and how much should Shapiro be compensated to put him on at least as high an indifference curve as he was before that, right? That's the only question. We don't have to ask the question, should he have been where he was before? Is the distribution of wealth in society that existed before Shapiro was punched by the angry student, was that just? We don't have to ask that question. If we're going to do compensation we just say, "How do we make Shapiro whole? How does the person who harmed him make him whole? How much money does he have to give him to make him whole?" That's the only question we have to answer. And Nozick wants to say that it's a much easier thing to deal with precisely because in a society where there's deep pluralism of values, where we don't agree on what's just, it's a much more limited inquiry into how do we compensate some particular individual to undo the particular harm that was done to them and make them whole. We don't have to agree on what the just distribution of wealth and income in this society is in order to answer that question, right? Is that a good argument that compensation is much less demanding metaphysically than redistribution, because compensation doesn't require us to agree on what is just? It just requires us to figure out how much is owed to Shapiro to make him whole after this harm has been done to him. Yeah? Is it a good argument? Student: Just because it's easy doesn't mean it's the right argument. Prof: Just because it's easy doesn't mean it's the right argument, but is it easy? Student: Well, you said it was, just metaphysically. Prof: It's easy in one sense. 
I mean it might be hard because I'm going to say, "Pain and suffering, and I was humiliated, and I've got to get extra money for that, and my girlfriend won't look at me because I've got a black eye, I should get extra money for that." We'll argue about this in court, so it might be hard in that sense, but it's easier in the sense that we don't need to agree on what my salary should be versus a banker's salary, versus--you know, we don't have to answer questions like that. We just want to know the number--we have to arrive at the number to undo the harm, to make me whole. Is that a bad argument? Yeah? Student: Well, it completely forgets the Enlightenment idea that there might be a right answer to anything, and just by saying that you're just dealing in compensation and not actually redistribution and figuring out... Prof: I mean, yeah, that's a good point, but I think Nozick would say, "I'm not saying there isn't a right answer. I'm saying we don't agree on what the right answer is, but I, Nozick, am saying one right answer's not to coerce people, so given that we have different values the answer is liberty. Let everybody have their own value and not impose one person's value on others." So I don't think he'd back down that quickly. Is there any other reason anyone might think this is--I think you're in the right direction, but you haven't quite nailed it. Yeah? Student: So, I mean, in terms of money, maybe it's easier to think about compensation, but when you're compensating people with goods or services you're really having to take those from somewhere else. They don't just appear out of nowhere. So, on some level you're already redistributing things when you're compensating. So it is really just another form of redistribution, but in another word. Prof: I think that's a good argument, but I think there's another one that I just want to leave you with, and this partly reflects when Nozick was writing. 
He wrote this in 1974, and in 1974, there was a lot more political stability in the world in some ways than there came to be subsequently. Of course it was unstable in the sense that we were in the middle of the Cold War, but the Cold War provided a lot of stability within countries that would subsequently become very unstable after the end of the Cold War. And so this idea that, well, fixing one injustice is something we can get our minds around, particularly if it means making people as well-off as they were before, whereas deciding what's just for the society as a whole isn't. It's a much bigger problem, and it is potentially much more politically explosive, because we're going to be, as Nozick says, "Liberty upsets patterns." We're going to be taking wealth from some people and giving it to other people, and the people we're taking it from are not going to like it. Whereas this compensatory idiom is backward-looking, it's focused and it's specific. But I think what he didn't take account of is that the backward-looking idiom runs into the problem of where do we stop. Where do we stop? After 1991, there were people in Russia who said, "All that property should be given back to the czar's children." After the transition in South Africa King Goodwill Zwelithini, the Zulu King, he said, "Well, all that land that was seized first by the British from my ancestors, and then under apartheid was seized again for the forced removals and all that, it should be given to me. I'm the descendant." And you only have to think about the Middle East to see that this idea that a backward-looking compensatory idiom is less politically explosive than a redistributive idiom as a political matter isn't true, because it all depends how far you go back. 
The Palestinians say we go back to 1967, or then there's the debate about going back to 1948, and there are many people who are there that say, "We should go back to the Old Testament as to where we should undo all the injustices that are piled up upon one another over time." So this idea that the compensatory idiom is less demanding than the redistributive idiom I think is politically naive and ultimately philosophically not tenable because you're going to have to make a decision about how far back to go before you decide that you have the benchmark for compensation. You only have to look at the vexed history of affirmative action which was, you know, read Randall Robinson's book, The Debt, the vexed history of affirmative action, to see how difficult a backward-looking compensatory idiom turns out to be in practice. Enjoy the midterm and we'll see you three weeks from today.
The Moral Foundations of Politics with Ian Shapiro
Lecture 6: From Classical to Neoclassical Utilitarianism
Prof: This morning we're going to begin making the transition from the classical utilitarian doctrine of Jeremy Bentham's to the neoclassical doctrine that was championed by a number of different figures in the late nineteenth and early twentieth century, and we're going to end up by focusing on John Stuart Mill as the principal expositor of neoclassical utilitarianism. And where we're headed is for a doctrine that I'm going to call the rights-utility synthesis. The rights-utility synthesis signals that we're looking for an attempt to put together both a commitment to utilitarian efficiency that's grounded in science on the one hand, and respect for individual rights that's grounded in the workmanship ideal on the other hand. And we're not going to actually get to the rights-utility synthesis as it's expressed in politics by John Stuart Mill until next Monday. What instead I'm going to do today is explain how the transition from classical to neoclassical utilitarianism really went on in all fields of thinking about the human sciences at more or less the same time. There were developments in political theory that we're going to talk about at considerable length, but there were also, undergirding that, developments in economics and in philosophy that are going to be my principal focus in today's lecture. What you're also going to get as a by-product of today's lecture is everything you ever needed to know about neoclassical economics in 45 minutes. That is to say neoclassical economics is a brilliant intellectual creation, to a large extent the creation of Vilfredo Pareto, an Italian economist that I'm going to talk about today, but with important contributions from other figures in the modern history of economics such as Marshall, and I'm going to mention in some detail an economist called Edgeworth. 
And they developed a system of thinking about economics and the theory of value that was going to be tremendously influential and important not only in the way utilitarianism evolved, but in the way our thinking about markets, legitimacy, and distributive justice would evolve. At the same time, more or less, there were very important developments in moral philosophy that I just want to alert you to, that we're going to return to later when we come to consider Alasdair MacIntyre's book, After Virtue. And this movement in philosophy that I'm mentioning here is the doctrine that would come to be called emotivism. It was associated with a man by the name of Stevenson who wrote several books advocating the emotivist doctrine when he was an untenured professor in the Yale Philosophy Department, and as a by-product he never became tenured in the Yale Philosophy Department because his doctrine was thought to be so repugnant. His doctrine was that when we make claims like murder is wrong, we're making claims that express our emotions, our emotive reactions to propositions just as when we say, "I like ice cream," or "I prefer chocolate ice cream to strawberry ice cream." All we're doing is expressing our tastes, our emotional reactions, and that there is nothing more to say about ethics than that. Now, you could say this emotivist doctrine is an endpoint in a philosophical evolution that really begins in the seventeenth century. Hobbes, who I mentioned to you in our very first lecture, criticized Aristotle for not seeing, as Hobbes thought, that what is desirable for some people is not desirable for others. But Hobbes didn't, truth be told, take that view that seriously, because he thought for the most part we're all pretty much the same. 
And if you look at the other classical utilitarians like David Hume, who we're not reading in this course but we could read in this course, or Sidgwick, who we're not reading in this course but we would have read in this course, they also basically thought human beings were more or less all alike in their psychological structure, in their basic human needs. Hume has a famous line somewhere to the effect that, "If all factual questions were resolved no moral questions would remain." "If all factual questions were resolved no moral questions would remain." And it's not that Hume thought we could derive an ought statement from an is. Hume's famous for the idea that an ought cannot be derived from an is; that there's a fact-value problem. But he thought, nonetheless, people are pretty much the same, and so if you can figure out what makes one of them tick you can figure out what makes all of them tick. And that was most emphatically Jeremy Bentham's view. It's presupposed in everything we discussed last time. If you think about the idea of doing interpersonal comparisons of utility, and making the judgment that taking that dollar from Donald Trump and giving it to the bag lady increases her utility more than it decreases his, you're assuming that they basically all have the same kinds of utility functions. Stevenson questioned that idea radically. He said, "We don't actually know. We should take Hobbes much more seriously in his critique of Aristotle than he was willing to take himself. We don't actually know if people have the same kinds of utility functions. We don't know whether or not what makes some people happy will make others happy as well." And so Stevenson was thought to be a proponent of a kind of moral relativism because he linked ethics to our desires, and preferences, and emotions, and nothing else. And then he said, "It's actually an open question." Stevenson was criticizing Hume on this point, but he might as well have been criticizing Hobbes or Bentham. 
Stevenson said, "It's an open question whether people are alike in their basic psychological structures in the physiology of their human needs." And so that was thought to be a radically relativist doctrine because it seemed to undermine the possibility of making ethical judgments of any sort across people. That is a doctrine to which we will return, as I said, when we get to the anti-Enlightenment and, in particular, Alasdair MacIntyre's book, After Virtue. But today we're going to focus for the rest of our time on the economics of the transition from classical to neoclassical utilitarianism. And I'm going to ask you to suspend disbelief for the rest of today's lecture and just trust me, because what I'm going to do is I'm going to go into this backwards. I'm going to come into the transition from classical to neoclassical economics by looking at a very different problem that the neoclassical economists were concerned with that had nothing to do with utilitarianism, or rights, or anything that we've been talking about in this lecture. And it's not until you get to the end of this narrative that you'll start to see why the transition from classical to neoclassical utilitarianism in economics was essential for the transition in political theory, and indeed for the transition in moral philosophy. So, as I said, you're going to have to suspend your disbelief and just follow me through the ABC's of neoclassical price theory, which is what we're going to do now. And as I said, the bonus here, the by-product is, you're going to get the whole of ECON 101 reduced to a single lecture. Because indeed it is true that enormously complex and subtle, and as sophisticated as the neoclassical theory of microeconomics is, it's all built out of three ideas. 
It's all built out of the three ideas that I'm going to spell out for you in what some of you might initially regard as laborious detail, but I'm going to do it anyway, and I think you'll see what I'm getting at once we get towards the end of today's discussion. So imagine a single person. We're going to call them A as testimony to my lack of imaginativeness, but you could call them anything you like. And let's imagine a world in which there are just two commodities; in this case wine and bread. The system of neoclassical utilitarianism invented by Pareto says that, other things being equal, you want more rather than less of any source of utility, right? We know that from classical utilitarianism. But if you have six bottles of wine and only one loaf of bread, bread is comparatively more valuable to you than wine, so that you would exchange a lot of wine to get a second loaf of bread. If, on the other hand, you were choking on your six loaves of bread and dying of thirst the reverse would be true. You would give up a lot of bread in order to get a small amount of wine. And these are what are called indifference curves in neoclassical economics. And indifference curves basically imply exactly as the name suggests that you would be indifferent among the mixes of bread and wine anywhere on this curve. And this curve is always shaped that way, convex toward the origin. Anybody want to tell us why? What is it reflecting? Why is it that shape? Someone who's done an econ class, or somebody who remembers Monday's lecture, yeah? Student: Because of diminishing marginal utility. Prof: Yes, okay. It reflects the idea of diminishing marginal utility. It reflects the idea of diminishing marginal utility in exactly the sense that I just said: that if you have a huge amount of bread the next loaf of bread is less valuable to you at the margin than the previous loaf of bread was. 
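The indifference-curve idea just described can be made concrete with a standard textbook utility function. The Cobb-Douglas form used below is an illustrative assumption (the lecture specifies no particular function); any utility with diminishing marginal utility in each good produces curves of this shape.

```python
import math

# Sketch of an indifference curve using an assumed Cobb-Douglas utility
# u(bread, wine) = sqrt(bread * wine).

def utility(bread, wine):
    return math.sqrt(bread * wine)

def wine_needed(bread, target_utility):
    """Wine required to stay on the indifference curve u = target_utility."""
    return target_utility ** 2 / bread

u0 = utility(2, 8)          # bundle A: 2 loaves, 8 bottles -> utility 4
print(wine_needed(8, u0))   # 2.0: bundle B (8 loaves, 2 bottles) is on the
                            # same curve, so A and B yield identical utility

# Diminishing marginal utility of bread (wine held fixed at 4 bottles):
gains = [utility(b + 1, 4) - utility(b, 4) for b in range(1, 5)]
print(gains)                # each successive loaf adds less than the one before
```

Note how the curve traced by `wine_needed` bows toward the origin: when bread is scarce (bundle A) you would give up six bottles of wine for six extra loaves, exactly the trade-off described in the lecture.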
So if you've got a lot of bread you'll give up a lot of bread in order to get a small amount of wine, okay? So that is the idea of diminishing marginal utility. And when we call this, what I've labeled here I-1, an indifference curve, we're saying literally you would get the same amount of utility no matter where you were on that curve. So if you have whatever we have here. This isn't very well-drawn but, say, four bottles of wine and two loaves of bread, you would be equally happy as if you had, what does it look like here, four loaves of bread and one and a quarter bottles of wine. You would be equally happy between those two distributions, okay? What would increase your happiness? What would increase your happiness would be to get onto a higher indifference curve. If you could have more bread and more wine of course you'd be happier, right? And so the idea of indifference curves is that you want to go this way. You want to go from P toward Q. You want to get onto, as they put it in the jargon of neoclassical theory, you want to get onto as high an indifference curve as you can possibly get. For those of you who like the jargon, this would be a utility function. You want to go up your utility function, all the way to Q if you could. We don't know where Q is. It's out in the stratosphere. But wherever you were on your utility function you could draw one of these curves through it in principle, so if you were here, you could find the mix of bread and wine in each instance among which you're going to be indifferent. So that is the notion of an indifference curve. Now, an important consideration in the theory of indifference curves was to say that we don't know-- I've put these equally far apart, but, in a way, I shouldn't have because it's misleading. If you get from one to two and then you get from two to three you haven't increased your utility necessarily by the same amount. These distances don't mean anything, okay?
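To make the diminishing-marginal-utility idea concrete, here is a minimal Python sketch. The Cobb-Douglas form u(bread, wine) = bread × wine is my illustrative assumption, not a function from the lecture; any utility with a diminishing marginal rate of substitution would behave the same way.

```python
# Sketch of an indifference curve, assuming the illustrative
# Cobb-Douglas utility u(bread, wine) = bread * wine
# (my choice for the example, not a function from the lecture).

def utility(bread, wine):
    return bread * wine

def wine_needed(bread, target_utility):
    """Wine required to stay on the indifference curve u = target."""
    return target_utility / bread

# All of these bundles sit on the same indifference curve: utility 12.
bundles = [(b, wine_needed(b, 12)) for b in (1, 2, 3, 4, 6)]
# → [(1, 12.0), (2, 6.0), (3, 4.0), (4, 3.0), (6, 2.0)]

# Diminishing marginal rate of substitution: when bread is scarce you
# give up a lot of wine for one more loaf; when bread is plentiful,
# very little. This is why the curve bows toward the origin.
drop_when_scarce = wine_needed(1, 12) - wine_needed(2, 12)   # 6.0 bottles
drop_when_plenty = wine_needed(4, 12) - wine_needed(6, 12)   # 1.0 bottle
```

Every bundle in `bundles` yields the same utility, yet the wine you would sacrifice for extra bread shrinks as bread accumulates.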
I could have put three right here because the system of neoclassical utilitarianism, unlike the system of classical utilitarianism, works with ordinal scales, ordinal mathematical scales, and as the word implies it means all we do is rank order. We rank-order our preferences, but we don't say anything more. So this individual A prefers four to three, three to two, and two to one, but we can't say that he prefers or she prefers four to three more than she prefers three to two. We don't know that. We don't have a cardinal scale. Remember that in Bentham's system we had cardinal scales. We were thinking of sort of lumps of utility that could be picked up and moved around and redistributed to people, right? The neoclassical economists didn't want to do that, and they didn't want to do that for a different reason than anything I've talked about in these lectures. They didn't want to do that because they were actually concerned with quite another problem. The problem they wanted to solve was to understand the behavior of markets. They wanted to be able to predict more precisely what prices were going to be in markets, and they wanted to do that for reasons I'm going to elaborate to you much later on when we come and talk about Marx, and the labor theory of value and its limitations, but that's for a future lecture. For today's lecture all you need to concern yourself with is the fact that they wanted to be able to understand the nature of markets, of how market prices move, but they wanted to be able to do this with as little information as possible. They realized that for Bentham's system to work, for example, the government would have to have a kind of utilitometer and run around sticking it under people's tongues to measure their utility, right? Very intrusive. You need a lot of information to do Bentham's system.
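The ordinal point can be sketched in a few lines of Python (my illustration; the numbers are made up). Because only the rank order carries information, any strictly increasing transformation of the scores represents exactly the same preferences:

```python
# Ordinal utility: only the ranking matters. A strictly increasing
# transformation of a utility function represents the same
# preferences. (Illustrative sketch; the scores are made up.)

bundles = {"x": 2.0, "y": 5.0, "z": 9.0}   # cardinal-looking scores

def ranking(scores):
    """Rank-order bundles from most to least preferred."""
    return sorted(scores, key=scores.get, reverse=True)

# Cubing changes all the 'distances' between scores, but the rank
# order -- the only thing the theory uses -- is untouched.
transformed = {k: v ** 3 for k, v in bundles.items()}

assert ranking(bundles) == ranking(transformed) == ["z", "y", "x"]
```

This is precisely why the distances between the indifference curves "don't mean anything": the labels 1, 2, 3, 4 could be replaced by 1, 8, 27, 64 without changing a single prediction of the theory.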
They wanted to say, "How can we develop a well articulated theory of market prices based on as little information about people and their preferences as possible?" And Pareto, and Marshall, and Edgeworth, and others who were in their circle, thought you could do this just with ordinal utility. So moving from cardinal to ordinal utility is going to turn out to have huge ideological consequences, which I'm going to unpack for you towards the end of today's lecture. But as an analytic matter, looking at this from the inside, it had the great virtue of providing the building blocks for a theory of price behavior in market systems that required almost no information about people. All we would know about this person A, as I said, is that they prefer four to three, three to two, two to one, one to zero, but we can't say anything about how much they prefer those things because these distances don't actually mean anything. All we get is an ordered ranking. Now, there is one other thing we can say. One other thing we can say is, that this is a no-no. These indifference curves cannot cross. Can anybody tell us why? Why can't they cross? Wait for the mic. Student: Because at the intersection they should have the same utility even though they're different indifference curves. Prof: You're on the right track, but what's the problem with their crossing? Student: Because you say I-2 has utility of two, I-2.5 has utility of two-point-five, but at that point where they intersect they both have to have the same utility. Prof: So you've got a kind of contradiction on your hands, is that right? Student: Yeah. Prof: Okay, and just to spell out the contradiction more emphatically--I think you basically made the point. If we're saying that we're indifferent among all the things on this curve and we're indifferent among all the things on this curve we can't have it cross because then we're saying here, right, two-point-five is preferred to two, but here we're saying the opposite. 
We're saying two is preferred to two-point-five, okay? The jargon, anybody happen to know? Yell it out. We don't need the mic. Does anybody know the jargon for this? Student: Transitivity. Prof: Transitivity. The preferences are assumed to be transitive. So if you prefer A to B and B to C, you must prefer A to C. That's all that transitive means, okay? If you prefer A to B and B to C, it must be the case that you also prefer A to C, otherwise you're contradicting the principle of transitivity, okay? So we cannot have these indifference curves crossing one another. Now, what we're going to do here, instead of one person and two commodities, we're going to think about two people, okay? We're creating a diagram with two people on it. As I promised you earlier in the semester, anything I do with a diagram I will also do verbally, so if you find this in any way confusing just listen to the narrative and then we'll see whether you get it that way. But so now we have a diagram with two people on it, okay? So this is person A, and this is person B. And these axes, the X-axis, here, is A's utility function. Remember A in the previous slide was trying to get from P towards Q, right? A was trying to go up here. So this, on this slide, is the same thing as this, on this slide. So A is trying to go this way and B is trying to go this way, okay? And what we imagine is some distribution of utility between them. So A has this much utility--if this is the status quo X, okay? A has this much utility and B has this much utility. A's happier than B, right? Wrong, A's not. We don't know that A's happier than B from what I just said, right? These distances don't mean anything, right? So it looks like A's happier than B, but that's misleading. If the different distances are taken to imply in your mind that A's happier than B disabuse yourself of that thought right away, okay? So we have a distribution here, okay? 
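The transitivity requirement behind the no-crossing rule can be checked mechanically. A hedged sketch in Python (the helper and the example preference pairs are my construction, not the lecture's):

```python
# Transitivity check for a preference relation given as a set of
# (better, worse) pairs. (Illustrative helper, not from the lecture.)

from itertools import permutations

def is_transitive(prefers, items):
    """True iff: (a,b) and (b,c) in prefers imply (a,c) in prefers."""
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

items = {"A", "B", "C"}
consistent = {("A", "B"), ("B", "C"), ("A", "C")}   # A > B > C: fine
cyclic     = {("A", "B"), ("B", "C"), ("C", "A")}   # like crossing curves

assert is_transitive(consistent, items)
assert not is_transitive(cyclic, items)    # the contradiction
```

Crossing indifference curves would encode something like the `cyclic` relation: each crossing point asserts both that one curve is preferred to the other and the reverse.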
Now what Pareto said, he said, "Let's draw a line north-south through the status quo, and let's draw a line east-west through the status quo," okay? And we'll imagine that there's a finite source of utility. It gets called in the econ textbooks the Pareto possibility frontier, so there's not an infinite source of utility. Now, Pareto said, "Well, if we draw the north-south and the east-west we get four quadrants. We get this one. We get this one. We get this one, and we get this one," right? And what Pareto said is, "Well, that's interesting because we can say different things about them." One thing we can say is if you can get anywhere into the northeast quadrant both of them are better off, right? So if we go from X to Y we know A's utility has gone up, and we know B's utility has gone up, right? We don't know by how much, but we know it's gone up so they're both better off. On the other hand, if we went anywhere in here, this quadrant, southwest as it were, obviously they're both worse off, okay? Because if I put a point here--let's not use Q, let's use J--if I put a point here, J, we would say that A's gone down and B's gone down. Now, to make this a bit more real imagine in here this is the sphere of market transactions. This is where A and B will go voluntarily, right? So A will say to B, "Well, I have all this wine, and you have all that bread, how about I swap you a bottle of wine for a loaf of bread?" And you say, "Okay." You give it to them, both people are better off, okay? And we know they're both better off because they did it voluntarily, and we know they're both trying to get onto as high an indifference curve as possible, right? So they swap their wine, they swap their bread, and both of them are happier. A little more tipsy, but also a little better fed, okay? A move into here would be as if the government taxes them both and uses the money to spend on foreign aid to a country they both despise, let's say.
So they both paid a tax and the money has gone to something they don't support. We can say that's Pareto inferior. Pareto superior, Pareto inferior, right? It's Pareto inferior because they both don't want it, and both of them would resist it if the government tried to do it. Obviously they wouldn't go there through a market transaction because it puts both of them on a lower indifference curve, okay? So that's all well and good. Well, that leaves these two other quadrants. And about those two quadrants Pareto says we can say nothing at all. We can say nothing at all, at least nothing scientific. And then he says, in his famous 700-page book, the Manual of Political Economy: "People are going to misinterpret me. People are going to interpret me as saying we should never move into either of these quadrants. I'm not saying that. All I'm saying is we will never have a scientific reason for moving into either of these quadrants. Because if we were to move from X to G, so that we tax A by that amount and we give it to B, we cannot say that B's gain is greater than A's loss because these distances don't mean anything despite where I put the G. We just have no way of knowing because we don't allow interpersonal comparisons of utility. And that's the link to Stevenson in philosophy that I was talking to you about earlier. There's no way of knowing whether B's gain is as big as A's loss or the reverse, because we can't make comparisons across individuals. There's no scientific way to do it. We can't assume with Bentham, and with Hume, and with Sidgwick, we can't assume that everybody's basically the same. Perhaps they are, perhaps they aren't, but we just don't know," okay? So that is the Pareto principle. The Pareto principle out of which the whole of neoclassical economic theory was constructed depends on this idea of indifference curves. A is trying to get up on those indifference curves. B is trying to get along on these indifference curves.
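The four quadrants can be captured in a small classifier. This is an illustrative Python sketch under the assumption that we are handed each person's before-and-after utility levels but may only compare each person with himself, never across persons:

```python
# Classifying a move from status quo to proposal in Pareto's scheme.
# Each state is (utility_of_A, utility_of_B); the levels are ordinal,
# so we compare each person before and after, never across persons.
# (Illustrative sketch, not from the lecture.)

def classify(status_quo, proposal):
    a0, b0 = status_quo
    a1, b1 = proposal
    if (a1, b1) == (a0, b0):
        return "status quo"
    if a1 >= a0 and b1 >= b0:
        return "Pareto superior"        # the northeast quadrant
    if a1 <= a0 and b1 <= b0:
        return "Pareto inferior"        # the southwest quadrant
    return "Pareto undecidable"         # one gains, the other loses

X = (4, 3)
assert classify(X, (5, 4)) == "Pareto superior"     # voluntary trade
assert classify(X, (3, 2)) == "Pareto inferior"     # the despised tax
assert classify(X, (2, 6)) == "Pareto undecidable"  # tax A, give to B
```

Note that the undecidable verdict is not a tie-breaking rule; it is the classifier refusing to answer, which is exactly Pareto's point.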
And we have Pareto superior, Pareto inferior, and then these two which he called Pareto-undecidable because you can't decide, not because you're a ditherer, but because there's no scientific way to tell whether, in this case, B's loss exceeds A's gain, and if we went up here it would be an analogous problem. Now, let's just suppose, going back, that A proposes to B swapping a loaf of bread for a bottle of wine and B agrees. They go to Y, and then A says, "Well, I'll give you another loaf of bread for another bottle of wine." And B says, "Forget it." He says, "Well, come on, how about a half a bottle of wine?" He says, "Okay," and then they go to Z, okay? And if they then get to a point at which no matter what swap A is willing to propose, B says no, and no matter what swap B is willing to propose A says no, then you know they've hit that frontier, what's called the Pareto possibility frontier. Because now there's no way to make B better off without making A worse off, okay? They will--"What about a half a bottle? What about a quarter of a bottle? Well, then I want two-thirds of a loaf. No, that's too much," blah, blah, blah, back and forth, "Forget it, I'm off," okay? When no transaction occurs, you know they've hit that frontier. So, the Pareto principle says that in a market system they'll move toward the frontier and when they get there, they'll stop. Now, of course, they may have gotten there some way else. They might have gone from X to G over here, and then they would have done a new one, and then they might have gone to here, and so on. And that would just reflect shrewdness in bargaining, or how much people cared lower down their indifference curves, or other idiosyncrasies. But once they wind up anywhere on this frontier they're not going to move off of it because now there is no way of improving one person's utility without diminishing the next person's utility, and that is what is called Pareto optimal, okay?
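The haggling-until-the-frontier story can be simulated crudely. In this sketch (my construction, not from the lecture) both traders have the illustrative utility u = bread × wine and swap a fixed amount of bread for wine, one-for-one, only when the swap raises both utilities; trading stops exactly when no mutually improving swap remains:

```python
# Crude bargaining sketch: trades happen only when BOTH parties gain;
# when no such trade exists, they have hit the Pareto frontier.
# Utilities are my illustrative Cobb-Douglas choice (u = bread*wine),
# and trades are restricted to 1-for-1 swaps of size `step`.

def u(bread, wine):
    return bread * wine

def bargain(a, b, step=0.25):
    """a, b are (bread, wine) bundles; returns the bundles at which
    no further mutually beneficial 1-for-1 swap exists."""
    while True:
        for db, dw in ((step, -step), (-step, step)):
            na = (a[0] + db, a[1] + dw)
            nb = (b[0] - db, b[1] - dw)
            if min(na + nb) >= 0 and u(*na) > u(*a) and u(*nb) > u(*b):
                a, b = na, nb   # voluntary trade: both strictly gain
                break
        else:
            return a, b         # every proposal refused: frontier

# A is bread-rich, B is wine-rich; they trade until neither can gain.
a_final, b_final = bargain((6.0, 1.0), (1.0, 6.0))
# → both end at (3.5, 3.5)
```

The even split at the end is an artifact of the perfectly symmetric starting point and the fixed one-for-one exchange rate; shrewder bargaining or other endowments would stop at a different point on the same frontier, just as the lecture says.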
So Y is Pareto superior to X, and Z is Pareto optimal, can't be improved upon. It's an optimum in that sense. That's it. That is neoclassical economic theory in a nutshell. Now, I'm going to show you one more diagram that will possibly look intimidating when I first put it up, but all it does is put the preceding diagrams that we've just looked at together. And this is what in the econ literature--it was invented by an economist called Edgeworth, and it's called an Edgeworth box diagram. And let me explain it for you diagrammatically, and then if anybody doesn't get it we'll wait up and I'll go through it more slowly. But think about the diagram we just did, okay? Think about A is here in the corner, okay? But basically now we're putting the two previous diagrams together. Maybe I'll just go back so everybody's clear what we're doing. We're putting this diagram, where we have two commodities and one person, and this diagram, where we have two people and just utility--we're putting it all together, okay, into one big picture. And so you'll see why this is helpful once we get to the end of it. So A is here, and A has indifference curves. This is our first picture, right? So A is trying to go northeast. We have A. We have wine. We have bread. A's indifference curves are the dotted lines, okay? A is trying to get that way, that way, that way, and to keep going all that way, right? And what Edgeworth did--it wasn't that it was a late night or anything that I put this writing upside down-- he said, "Just imagine a mirror going down here." B is coming at A from the other corner, okay? So B is looking at bottles of wine, loaves of bread, and B has now got the solid indifference curves, and B is trying to go this way, right? B is trying to go this way. So A would improve if he went this way, and B would improve if she went this way, okay? So this is an indifference curve for B. Now, it's going southwest instead of northeast just because it's looking in the mirror.
B is A looking in the mirror, get it, okay? So B wants to go this way, as I said, southwest; A wants to go northeast. So if you imagine that this were the status quo point, right where A has almost all of the wine and B has almost all of the bread, then this shaded area here, this big football, is the Pareto superior set on the previous diagram. Because if you think about it, it's the set that if you move anywhere in the biggest football I've shaded here-- let's say we started here at X where A had almost all the wine and B had almost all of the bread. If they went from X to Y, A would be on a higher indifference curve because A would have gone from here to here, and B coming the other way would also be on a higher indifference curve, and you could draw a new football, right? It would be a smaller football within the bigger football. And then they haggle again. And A says, "Well, what about half a bottle?" B says, "Well, then I want three-quarters of a loaf," blah, blah, back and forth, and they wind up at Z. And you know Z is on the Pareto possibility frontier because every proposal that one makes, the other rejects, right? And so you capture it in an Edgeworth box by having their indifference curves be points of tangency. So at Z, A wants to go this way, but the only way that A can go this way would move B off his or her indifference curve, which is this one coming this way, okay? So the Edgeworth box diagram just puts all of the pictures together. There's nothing conceptually new in it at all, and it only becomes relevant to our purposes because it will enable us to start thinking about distributive questions, which I will get to in a minute. Okay, let me pause because I want to be sure--have I gone through this too quickly? Would anyone like me to walk through it again? If you think you would want me to walk through it again probably half the people in the room do, so don't feel awkward. It's actually a lot simpler than it looks, right?
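The tangency condition in the Edgeworth box can be checked numerically. A hedged sketch, again assuming the illustrative Cobb-Douglas utility u = bread × wine for both traders (for which the contract curve happens to be the box diagonal); none of these numbers come from the lecture:

```python
# Contract-curve check in an Edgeworth box with total endowments of
# 8 bread and 8 wine, both traders having the illustrative utility
# u = bread * wine. Tangency means MRS_A = MRS_B; for this utility
# the contract curve is the diagonal of the box.

TOTAL_BREAD, TOTAL_WINE = 8.0, 8.0

def u(bread, wine):
    return bread * wine

def mrs(bread, wine):
    """Marginal rate of substitution for u = bread * wine."""
    return wine / bread

def on_contract_curve(bread_a, wine_a, tol=1e-9):
    bread_b, wine_b = TOTAL_BREAD - bread_a, TOTAL_WINE - wine_a
    return abs(mrs(bread_a, wine_a) - mrs(bread_b, wine_b)) < tol

def pareto_improvable(bread_a, wine_a, step=0.1):
    """Does any small reallocation make BOTH strictly better off?"""
    bread_b, wine_b = TOTAL_BREAD - bread_a, TOTAL_WINE - wine_a
    base_a, base_b = u(bread_a, wine_a), u(bread_b, wine_b)
    for db in (-step, 0.0, step):
        for dw in (-step, 0.0, step):
            if u(bread_a + db, wine_a + dw) > base_a and \
               u(bread_b - db, wine_b - dw) > base_b:
                return True
    return False

# On the diagonal: indifference curves are tangent, no mutual gain.
assert on_contract_curve(3.0, 3.0) and not pareto_improvable(3.0, 3.0)
# Off the diagonal: curves cross, so a mutually improving trade exists.
assert not on_contract_curve(6.0, 1.0) and pareto_improvable(6.0, 1.0)
```

The two assertions restate the picture: off the contract curve there is always a "football" of mutually beneficial trades; on it, every proposal one party makes, the other rejects.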
I mean, all you have to do is take the previous, the Pareto principle diagram and imagine it with a mirror and think of B as upside down coming toward A, and then it just all fits together. Very clever. Okay and the one thing I would say, this line here is what on the previous diagram was the Pareto possibility frontier, right? This line is all the points of tangency between--like there is one, there is one, there is one. So you join up all the points at which the indifference curves are at tangents to one another. That is the line where if they get onto this line they won't move off of it voluntarily, right? By definition. So they went from X, to Y, to Z, but perhaps if A had driven a harder bargain early on, they might have gone by a different path and would have wound up at a different point. This is the Pareto possibility frontier which Edgeworth called the contract curve. It's the same thing. It's where they will make a contract to get to. They will agree to transactions that get them onto that curve, but once they're on it they stay there, okay? That's the basic intuition. So market transactions into the Pareto superior zone, and eventually they stop when things are Pareto optimal. Now, let's think about comparing classical and neoclassical utilitarianism. If you think back to Monday's lecture we said with Bentham's utility anything--this isn't a very good forty-five degrees is it? I guess it is. It just depends where you stand. With Bentham's utility we said that anything in this whole area, the first shaded area, maximizes the greatest happiness of the greatest number, right? That was his imperative. The Pareto principle, we now know, singles out this Pareto superior area as unambiguously better because people will go there voluntarily. So everything that is Pareto superior is Bentham superior, right? Everything that's Pareto inferior is Bentham inferior, right? This is uninteresting. 
It's unambiguously worse whether you're a classical or a neoclassical utilitarian, right? So everything that's Pareto superior is Bentham superior. Everything that's Pareto inferior is Bentham inferior. But now the interesting stuff, which is where all of redistributive politics goes on, and all the battles in politics go on, is in the two Pareto undecidable quadrants, right? This one and this one, about which Pareto says, as a matter of science, we can say nothing, and Bentham disagrees, right? Bentham's principle bisects these Pareto undecidable quadrants because Bentham makes his interpersonal judgments of utility. And so Bentham says, if you go into this area, it's a Bentham improvement, though it's Pareto undecidable, whereas if you go into this area, it's not a Bentham improvement even though it's Pareto undecidable. So Bentham and Sidgwick, and Hume, and the classical utilitarians think we can make interpersonal judgments which allow us to say when it will make sense for the government to tax A, and benefit B, whereas Pareto says there's no way to tell. Again, he says, "People are going to interpret me as saying the government should never redistribute. I'm not saying that, and it's a misuse of my doctrine to say that. All I'm saying is if the government chooses to redistribute, there's not going to be a scientific principle to tell them how." So just to make the point dramatic let's suppose--we're going back to the Edgeworth diagram. Let's suppose this is the status quo. That is, let's suppose B has everything and A has nothing. B has all the wine and all the bread. We know they're on the contract curve because A has nothing that B wants, right? So if you think, again, of Trump and the bag lady, she has nothing that he wants, so the Pareto efficient outcome is for the bag lady to starve. It might not be morally defensible, but it's the Pareto efficient outcome. They're on the contract curve.
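The disagreement in the undecidable quadrants can be made explicit with stipulated cardinal numbers--stipulated being the operative word, since Pareto's whole objection is that such numbers are unobservable. An illustrative Python sketch:

```python
# Bentham vs. Pareto on the same move, with made-up cardinal numbers.
# Bentham sums utilities across persons (an interpersonal comparison);
# Pareto refuses the sum and compares each person only with himself.

def bentham_improvement(status_quo, proposal):
    """Classical test: does total utility rise? Needs cardinal scales."""
    return sum(proposal) > sum(status_quo)

def pareto_verdict(status_quo, proposal):
    """Neoclassical test: ordinal, person-by-person only."""
    a0, b0 = status_quo
    a1, b1 = proposal
    if proposal == status_quo:
        return "status quo"
    if a1 >= a0 and b1 >= b0:
        return "superior"
    if a1 <= a0 and b1 <= b0:
        return "inferior"
    return "undecidable"

X = (10, 2)
# Tax A, give to B, with a (stipulated) small loss to A, big gain to B:
G = (8, 7)
assert pareto_verdict(X, G) == "undecidable"   # Pareto: no verdict
assert bentham_improvement(X, G)               # Bentham: do it
```

The same move is a clear improvement for Bentham and a scientific blank for Pareto, which is exactly how Bentham's line bisects the undecidable quadrants.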
There is nothing you can do to improve the bag lady's utility that will not diminish Trump's utility, right? Now you could say, "But of course, she's on the verge of starvation. How can that possibly be the case," but notice we've ruled out interpersonal comparisons of utility now. And so proponents of neoclassical utilitarianism have no way to make interpersonal judgments of utility. So the Pareto efficient transaction is no transaction, and the bag lady starves to death. You'll see when we come to read John Rawls later in the semester he says the trouble with utilitarianism is that it doesn't take seriously the differences among persons. The trouble with utilitarianism, says John Rawls in A Theory of Justice, is that it doesn't take seriously the differences among persons. You now know enough to see that actually Rawls' argument is half-right, because the truth is that's the problem with classical utilitarianism. Classical utilitarianism says, "Well, if taking all of your utility and giving it to me increases overall net utility then we should do that, because you improve the greatest happiness of the greatest number." We don't care who has the utility, right? So classical utilitarianism is indeed vulnerable to Rawls' critique, it doesn't take seriously the differences between persons, right? And we saw that when you allow interpersonal judgments of utility and interpersonal comparisons you get the radically redistributive doctrine that Bentham then tries to fend off with his distinction between absolute and practical equality that we talked about last time. But Rawls' point doesn't actually apply to neoclassical utilitarianism. The problem with neoclassical utilitarianism is that it takes the differences between individuals hyper-seriously, so seriously that you would say the bag lady should starve in this example rather than have something redistributed to her by the state, forcibly taken from Trump.
So classical utilitarianism ignores the differences among individuals; neoclassical utilitarianism fetishizes the differences among people to an incredible extreme, so that a proponent of neoclassical utilitarianism like Richard Posner, in his book The Economics of Justice, concedes that it's a problem with neoclassical utilitarianism that if you have a disabled person who's not capable of working for anybody else--or, as he puts it, of "contributing to anybody else's utility function"--there's no reason that that person shouldn't be allowed to die. And Posner says, "Well, that's a problem with utilitarianism." And he throws up his hands and says, "I don't really know what to do about it," and he moves on. It's a deep problem with neoclassical utilitarianism. But notice from the point of view of the history of ideologies what has happened in this transition from classical to neoclassical utilitarianism. We've gone from a world in which the doctrine of classical utilitarianism was a very radical idea that would legitimate huge redistribution by the state, into a world in which the radical fangs of classical utilitarianism have been ripped out and it is now a doctrine that is very friendly to whatever status quo happens to be generated in a market system. So it ceases to be this radically redistributive doctrine, and in the process imports into utilitarianism a very robust, some would say, hyper-robust doctrine of individual rights, and we'll see how that played out in political theory when we come to look at John Stuart Mill's harm principle next Monday.
Lecture 21: Contemporary Communitarianism II
Prof: Okay, so let's pick up where we left off on Monday, and we were starting to talk in more detail about MacIntyre's argument. And one of the things I want to zero in centrally on today is his account of human psychology, or what used to be called human nature. And that differs importantly from every account of human psychology we've looked at thus far for a number of reasons, and I'm going to spell them out in a little bit more detail than we talked about on Monday. Basically it is an Aristotelian conception of the structure of human psychology, and that has a number of features to it. One is that I mentioned to you, that we think of human beings as teleological creatures. We are purposive creatures. We always want to know what the point of an activity is. Now if you go back to Aristotle, Aristotle held the view that while we are purposive creatures we only reason about the means to achieve our goals, not the goals themselves. For Aristotle, the virtues were given for all time, and he had his list of the virtues which you can find if you go back and read the Nicomachean Ethics. They're things like courage, honesty, and various others, perhaps debatable, perhaps not. But in any event in Aristotle's scheme of things, what we reason about is the means to achieve the virtues, not the virtues themselves. And I'm going to come back to that because one of the respects in which MacIntyre differs from Aristotle is that he wants to go at least this far with the Enlightenment thinkers: he wants to say that what counts as a virtue is, to some extent, up for debate. Just to what extent and how we debate it are things that I'll come back to. But the more important point is that on this account of the structure of human psychology we need to know what the point is. We have goals. We have purposes. When those purposes are met we're happy and fulfilled.
When they're not met we're frustrated and unsatisfied and discontented. Just to give you a somewhat different perspective on this approach to thinking about human psychology I would say that the other influential Aristotelian of our time, besides Alasdair MacIntyre, is the Nobel Prize-winning economist Amartya Sen who develops the idea of human potentiality as the core notion. That we're people who have potential, and unless we realize our potential we're going to be in some basic sense incomplete, not fulfilled, not happy human beings. And so in Sen's account of justice, which I wish we had time to talk about in this course but we don't, it's analogous, though since he's an economist, not a philosopher, he brings the considerations of an economist to this question. But it's the same question of human beings who have a certain kind of potential that has to be realized in order for them to be fulfilled in their teleological capacity. And so with MacIntyre, too, it's this idea of the structure of human psychology being goal-directed, and we'll get into the content later. But so that's the first thing, and then the second was that our behavior is, in a very fundamental way, other-directed. So just, again, by way of comparison think back to the assumptions about other-directedness in previous thinkers who we have considered in this course. When we focused on the utilitarian tradition we saw people as essentially self-referential. Remember Ronald Reagan's famous question in 1984, "Are you better off than you were four years ago?" was a self-referential question. "Are you better off than you were four years ago?" All you have to look at is how you were faring four years ago and how you're faring now, and ask yourself that question. Other people don't enter into it at all. Then when we considered Marxism we came to think about other-directed conceptions of human psychology in that Marx made our welfare critically reliant on what others get.
And so he said that your perception of whether you're exploited turns upon what your employer gets. Remember we talked about this in some depth and concluded that Marx was half-right: people are often other-directed in that their sense of whether what they get is legitimate depends on what others get, but where he was wrong was that people tend to compare themselves to people who are similarly situated in the socioeconomic order. So autoworkers compare themselves maybe to steel workers, but not to auto executives, and they don't make those kinds of global comparisons. Nonetheless there is something to this idea that people's conception of their welfare is connected to what others get, and so relative inequalities matter simply from the perspective of your feeling of how well-off you are. And you can tell yourself any number of stories to make this point. The telling one, I think, is this: you give a child a glass of orange juice that's half full and he's very happy, and then you give his sister one that's completely full and he immediately becomes unhappy because what matters to him is the relative difference and not what he has in his glass. So that's a notion of other-directed. And of course you can take this notion of other-directed further, still, by linking your utility not just to what other people get, but to what other people actually experience. And this is the concept of interdependent utility, as it gets referred to in the literature. And the examples here are things like a parent taking pleasure in their child's success and feeling pain at their child's lack of success. One of the famous one-liners that parents will resonate with is, "You can only be as happy as your least happy child." So if your kids are unhappy then you are miserable. That's the notion of interdependent utility.
Your happiness is conditioned on what others get, but not just in making a relative judgment, as in the orange juice kind of case; your happiness becomes dependent on what people actually themselves experience, hence interdependent utility. The example of the child's success making the parent happy is an interdependent positive utility, but of course there are interdependent negative utilities as well, as when a sadist's or a rapist's utility goes up as a byproduct of their victim's utility going down. Again, that's interdependent negative utility, and it's the mirror image of interdependent positive utility. The Aristotelian conception that MacIntyre embraces takes interdependent utilities a step further even than that, though, in that it's not just that your happiness, or satisfaction, or fulfillment is conditioned upon the experience of others, but it's the experience of others as it relates to you. It's the others' experience of you, if you like. And so a good example of this actually comes from Hegel's Phenomenology where he talks about this dialectic between the master and the slave as being inherently unstable. But it's not just unstable because the slave is going to find it unsatisfactory. It's unstable because the master will find it unsatisfactory. Hegel says, in The Phenomenology, that what we want most fundamentally is recognition from others, but it can't be recognition from others that we don't respect. So recognition of a slave saying, "Yes, master. No, master," is not going to be satisfactory or fulfilling to you. What you want is recognition from an equal or perhaps a superior, but you want recognition from somebody whose recognition can be valuable to you. It's a source of feeling valued by somebody you value.
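Interdependent utility can be sketched with a toy additive form (my construction, with made-up weights; nothing in the lecture commits to this particular functional form):

```python
# Toy model of interdependent utility: overall utility is one's own
# utility plus a weighted share of the other person's. A positive
# weight models the parent; a negative weight models the sadist.
# (Illustrative numbers only.)

def overall(own, other, weight):
    """weight > 0: positive interdependence; weight < 0: negative."""
    return own + weight * other

parent_before = overall(own=5.0, other=2.0, weight=0.8)   # child unhappy
parent_after  = overall(own=5.0, other=9.0, weight=0.8)   # child succeeds
assert parent_after > parent_before   # "as happy as your least happy child"

sadist_before = overall(own=3.0, other=6.0, weight=-0.5)
sadist_after  = overall(own=3.0, other=1.0, weight=-0.5)  # victim worse off
assert sadist_after > sadist_before   # negative interdependence
```

Note this simple additive form cannot yet capture MacIntyre's further step, where what matters is the other person's experience *of you*; that reciprocity is precisely what the toy model leaves out.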
So it takes this idea, as I said, of interdependent utility even a step further because it now goes to the reciprocal relationship between you and the other person, and that is the idea of practices. It's this sort of neo-Hegelian idea in this particular sense that informs MacIntyre's notion of a practice. When I gave you the example on Monday I said, "If I build sheds and write books, it's not enough for me to have the carpenters say, 'Oh, you wrote a good book,' and the academics say, 'That's a nice shed.' I want the carpenter to say that's a good shed, and the people who know about writing books to say, 'This is a good book.'" That's the notion of "he's a pitchers' pitcher." You want recognition from people for whom you have esteem at the relevant activity. So this is a different, again, view of the structure of human psychology. All of these different theories are predicated on some account of the structure of human psychology. And indeed, I think that one thing you should be seeing as we move through and examine these different arguments is that every political theory has an account of human psychology, and an account of how the world operates causally. And you need to sometimes dig to get them out, but one of the ways in which you should evaluate them is you should ask yourself, "What is the theory of human psychology driving this? Does it make sense? Is it true to my own experience or not, and if not why not?" That's one question, and then the second is, "What are the assumptions about how the world works causally?" and I'll come back to those in relation to MacIntyre again. But you could go through all the thinkers we've considered and find assumptions and arguments about those two things, and they provide a very good comparative reference point for thinking about how they stack up against one another, as we've just seen as we went from the utilitarian, to the Marxian, to the different variants of other-referential assumptions about human psychology. 
So this is the MacIntyre approach: that what we really want out of life is esteem from people who we esteem, and that operates through these practices as he describes them. And this is just a summary statement of what I've already told you, where he says virtues, as he describes them, are embedded in practices. "It belongs to the concept of a practice as I have outlined it-- and as we're all familiar with it in our actual lives, whether we are painters or physicists or quarterbacks or indeed just lovers of good painting, or first-rate experiments or a well-thrown pass-- that its goods can only be achieved by subordinating ourselves within the practice in our relationships to other practitioners. We have to learn to recognize what is due to whom; we have to be prepared to take whatever self-endangering risks are demanded along the way; and we have to listen carefully to what we are told about our own inadequacies and to reply with the same carefulness for the facts." So this is the notion that I translated into the proposition that when you walk into your first Yale College course, you don't look around and say to the people next to you, "How should we run this course? What should we agree to?" On the contrary, what you do is say, "What is the practice? What are the norms? What are the rules? How do I do well here? What's expected of me? How will I know if I'm doing well?" It might occur to you much later on to say, "That wasn't a very well-run course. It could have been run better if this, that and the other." So you can criticize, and I'll come back to that point later, but you can only criticize on the basis of first having internalized the norms governing the practice. You don't criticize somebody's pitching style in major league baseball until you've learned a lot about pitching. Or maybe you do. 
You sit there and throw your beer can at the screen, "Err, he's terrible," but it's not the form of criticism he is going to care about, and that's the point, and it's not likely to be the criticism that makes any difference. So rather than the voluntary agent being at the center of things, which is the workmanship story, this anti-Enlightenment story subordinates the individual to the practice, to the group, to the inherited system of norms and values. And if you want a contemporary reference point, just, again, to sharpen the comparison here, compare the Cartesian idea, which puts the willing agent at the center of the universe, with Ubuntu, a kind of African philosophy which does the contrarian thing. Instead of, "I think therefore I am," it's "I am because we are, and since we are, therefore I am." So it's this idea of subordinating yourself to an inherited system of norms and practices in which you find yourself, not which you created or hypothetically might have created, but rather in which you find yourself. So this is a fundamental difference. It doesn't get more fundamental than this, I think, because as you've seen in every variant of Enlightenment thinking, it's the willing creative agent that is at the ultimate bottom of the heap, and here it's not. It's the subordinated individual, and the community, or the practice, or the tradition to which the person is subordinated that is at the very bottom of the heap. So this is really a fundamental difference. So, features of Aristotelian, or we might call it neo-Aristotelian, psychology: one is we're teleological in the sense of being purposive. We have potential to realize, and realizing that potential is essential to our happiness. It's not just getting more of what gives you utility. There's some developmental dimension to this. Second, the individual is subordinated to a practice, and that translates into this idea of the community coming before the individual in politics. 
But third, and in some ways for MacIntyre's story most important of all, is that on the Aristotelian scheme human nature is malleable. It's plastic. It can be shaped in different ways. And what is so deeply misguided about the Enlightenment project from MacIntyre's point of view is that it does not take account of this fact. It does not take account of the fact that human beings are malleable creatures. Human psychology is shaped by circumstances. And so when Rousseau says at the start of The Social Contract, "I'm going to reason about politics taking men as they are and laws as they might be," that underscores the fact that Rousseau is missing the developmental features of human psychology in reasoning about politics. And it's in missing that that the Enlightenment project went off the rails. So let's talk about Aristotle's scheme. Let me just read this to you and then I will explain it if you haven't understood it. He says in his description, "We thus have a threefold scheme in which human-nature-as-it-happens-to-be (human nature in its untutored [or raw] state) is initially discrepant and discordant with the precepts of ethics and needs to be transformed by the instruction of practical reason and experience into human-nature-as-it-could-be-if-it-realized-its-telos. Each of the three elements of the scheme-- the conception of untutored human nature, the conception of the precepts of rational ethics, and the conception of human-nature-as-it-could-be-if-it-realized-its-telos-- requires reference to the other two if its status and function are to be intelligible." So his point here is, if you go back and read Aristotle, he had a view of raw, or untutored, or brute human nature, how we are when we are born. Then he had an account of human beings as they could be if they realized their telos, their purposes, and then he had an account of the rules that get you from the one to the other. 
The rules, the norms that get you from the one to the other, and those are the rules of ethics. So what are the rules of ethics? The rules of ethics are those things that get you from untutored or raw human nature to the kinds of beings who realize their telos or purposes. And that's why, if you look at ancient philosophers like Aristotle or Plato, the two central features of their political theories that are completely absent when you read, say, John Rawls, are a theory of family life and a theory of education. It would have been bizarre beyond imagining for Aristotle or Plato to think you could have a theory of politics that didn't pay a huge amount of attention to those two things. Think of how many people in this room have read Plato's Republic, at least a good number of you. That's a book about politics. Think how much of that is about how to rear children and the system of education that has to prevail, an enormous amount of it, probably half the book. There's nothing of that in Rawls's Theory of Justice, or Mill's account in On Liberty. So it's a big difference-- and the same is true of Aristotle's ethics and his other writings about politics: lots of attention to education, to the family, to these things that involve shaping this malleable, plastic human nature so that we have a good outcome rather than a bad outcome. And so what bugs MacIntyre is, he's saying, "We want to derive the rules of ethics from human nature as it is, as we find it. But that's never going to work, because the rules of ethics are designed to improve behavior, not to simply aggregate behavior. And so anything we design that's just derived from people as they happen to be is going to seem unsatisfying to us, unsatisfactory to us." So to go back to the examples he mentions at the beginning of the book: abortion, affirmative action and so on. 
Yes, we're going to see people have different values and therefore different views about those questions, but they're not going to ultimately be satisfied with just recognizing their differences, because they have inherited a whole language of talking about ethics that presumes these things can be resolved. There is more to ethical disagreement than just accepting the differences among us. And that, at the end of the day, is why the Enlightenment project was a fool's errand. This is just MacIntyre's statement, more eloquent and lucid than I'm capable of, of what I've been saying to you for the last five minutes. He says, "Since the moral injunctions were originally at home in a scheme in which their purpose was to correct, improve and educate that human nature, they are clearly not going to be such as could be deduced from true statements about human nature." Just think of Bentham. He says, "We are driven by pleasure-seeking and pain-avoiding. I'm going to derive a whole system from that." MacIntyre is saying, "That's crazy, because these moral injunctions were designed to be at odds with human nature." "The injunctions of morality, thus understood, are likely to be ones that human nature...has strong tendencies to disobey. Hence the eighteenth-century moral philosophers engaged in what was an inevitably unsuccessful project; for they did indeed attempt to find a rational basis for their moral beliefs in a particular understanding of human nature, while inheriting a set of moral injunctions on the one hand and a conception of human nature on the other, which had been expressly designed to be discrepant with each other." "They inherited incoherent fragments of a once coherent scheme of thought and action and, since they did not recognize their own peculiar historical and cultural situation, they could not recognize the impossible and quixotic character of their self-appointed task." 
Whether it's Bentham trying to derive maxims of conduct from the postulates of pleasure-seeking and pain-avoidance, or Rousseau trying to come up with a system of government taking men as they are and laws as they might be, or Kant trying to see what principles human beings as they currently are would affirm from every conceivable standpoint, it's never going to work. The morality that comes out of this kind of reasoning is going to seem, and actually will be, unsatisfying to us. And so if you now go back and look at some of the puzzles and conundrums we came up with when we were considering those different theories, MacIntyre would say, "Well, you shouldn't be surprised. You shouldn't be surprised that objective utilitarianism allows people to take advantage of one another in appalling ways, and you shouldn't be surprised that subjective utilitarianism allows people to ignore one another in a morally appalling way. Because what makes things seem morally appalling to you-- treat people with dignity, don't lie, cheat and steal-- these are maxims that we know people have a tendency to disobey, and so that's the point of them. They're to make us behave better, and you're not going to be able to derive them in this sense from a completely empirically accurate description of human beings as we find them in the world." So it's a kind of secular doctrine of the fall. In Christian thinking we have the fall, and then we have inadequate human beings, and the possibility of redemption through internalizing and accepting Christ as your savior. This is a secular version of the fall; the secular version of fallen man, untutored, ill-formed human nature, and then we have what we could be if we are properly formed, and then the system of the rules of ethics that are going to get us from the one to the other. 
And it's the virtues within practices-- achieving them, or achieving lives lived in accordance with them-- that enable us to realize our purposes in a good way, and it's the rules and norms governing those practices that get us from A to B. And the big problem with the Enlightenment philosophically is what I've just said to you, that they try and engage in this quixotic task, but in politics itself it's the creation of an emotivist world. It's the creation of a world that separates means from ends; that values instrumental reasoning, the business school world at large. The world in which we make ourselves agnostic about people's purposes and just try to set ourselves up as the most efficient executor of any purpose. "Have skills, will travel." "I'm a consultant." "For what?" "Well, whatever you need." That's what he finds politically shocking and appalling, but he thinks it's underpinned by this whole philosophical scheme. Now, MacIntyre wrote this during the-- this book came out in 1984, which was at the height of the Cold War. One of the points he was making was everybody thinks that the great battle of our time is the confrontation between capitalism and communism, but for MacIntyre these are two sides of the same coin, because they are both ways in which this Enlightenment project has been played out in politics and they're both, for him, appalling. I started out saying to you MacIntyre thought of himself as a person on the political left, at least when he was young, when he wrote Marxism and Christianity and Against the Self-Images of the Age and those early books, but it's in a different sense. It's not like Robert Bork said at one point in his confirmation hearings-- you all are really too young to remember this, but they questioned him about his changing ideological beliefs and at one point he said, "Well, I've always thought if you're not a socialist before the age of 30 you have no heart, and if you are a socialist after the age of 30 you have no head." 
This isn't MacIntyre. He wants to say the whole Enlightenment venture, whether it's in its capitalist or its socialist form is a variant of this deeply muddled, hopeless quest to develop a science of politics based on human nature as we find it. And so his prescriptions are rather different. He says, "What matters at this stage is the construction of local forms of community within which civility and the intellectual and moral life can be sustained through the new dark ages which are already upon us." He's not a utopian; he's a dystopian, almost. "And if the tradition of the virtues was able to survive the horrors of the last dark ages, we are not entirely without grounds for hope. This time however, the barbarians are not waiting beyond the frontiers; they have already been governing us for quite some time. And it is our lack of consciousness of this that constitutes part of our predicament. We are waiting not for Godot, but for another--doubtless very different--St. Benedict." Now people criticized MacIntyre for a variety of things, but one of the things they criticized him for was that this seems to be sort of throwing up your hands and saying, "The whole world is terrible. We're about to enter the new dark ages, and there's not much we can do about it except hunker down and hope." And he wrote several books, probably the most important of which is a book called, Whose Justice? Which Rationality? in an attempt to respond to these criticisms. I'll talk about some of that in a minute. On the actual politics that flow from this, he never had anything more to say than this, so it is a diagnosis without a cure. I said this was written at the height of the Cold War. If you look at the post-Cold War and you say, "What would MacIntyre say today?" 
it's not at all clear, because he certainly would not be a fan of militant Islam, but it's not clear what he would-- you know, he would probably regard the spread of globalization as just one more step in this appalling triumph of instrumental reasoning and values. So I think he would be no less despondent, and I think his diagnosis-- this is pure speculation, I haven't had this conversation with him-- but my guess is that his diagnosis of militant Islam and Jihadism and all of that would be, he would just say, "Well, it's an inevitable undertow. It's an inevitable reaction by the losers from globalization against the endless spread of Enlightenment ideas." I think in some ways there could be a more interesting MacIntyrian analysis of China in that, in China, we see now the fusion of communist politics and capitalist economics, neither of which shows any signs of going anywhere. And so if in 1984 somebody had said, "Well, capitalism and communism are two sides of the same coin," people would have been quite dismissive. When you think about China today, it's quite thought-provoking. So that's MacIntyre in a nutshell. Lots of it's quite appealing. This account of human beings, how many people-- never mind the unrealism of the politics, how many find this account of human nature plausible, true to your own experience? At least some. Not that many? We do, you know, we want-- we've sort of got lots of unrequited love. We want to be liked. We want to be wanted, and we want to be wanted by cool people, or people we think are cool. So that whole side of it seems right, and the notion that political morality should somehow be an improving thing seems right. That you can't just derive it from the-- as Rousseau said it, "If you take all of our individual preferences, and add them up, and get rid of the pluses and minuses that cancel one another out," that's maybe the general will, but MacIntyre would say, "It's going to be the lowest common denominator. 
It's going to cater to our base instincts, and we are malleable, shapeable, improvable creatures and we're not happy unless that's actually happening." So all of that seems right, but on the other hand, as I mentioned to you, MacIntyre makes one huge concession to the way of thinking that at least motivated the Enlightenment, which was that Aristotle was wrong to say we don't reason about ends. Aristotle took ends as given and said we only reason about means. So the big challenge for MacIntyre is to say, "Well, how do you say ends can be put in question as subjects for debate, but not then wind up with emotivism?" How do you structure, how do you put limits, if you like, on debates about ends, in a way that recognizes that people do, in fact, reason about them? A difference between human beings and dogs, let's say, or lions is that dogs and lions can't think critically about their own ends. They just think instrumentally; their ends really are given. A dog wants to sit on a warm couch. A lion wants to kill a buck, and there's no question about that for a dog or a lion. How you get on a warm couch without being kicked off it by some nasty human, that's a question for a dog, but not whether you want to be on it. Human beings aren't like that. Human beings are capable of thinking critically about purposes and goals. What MacIntyre wants to say is we have the structure of an Aristotelian psyche but not the content. We are not, in the end, dogs and lions in that sense. We can critically appraise our goals, but then we can argue about them, and then we can disagree about them, and we can even have civil wars about them when it really gets down to it. So what do you do with the fact that we reason about goals? That's why he introduces the concept of a practice. He says, "Well, it's true we reason about goals, but we don't pluck them from nowhere. 
We're born into practices where the goals are defined, where the goal of chess is to checkmate the other person with as many of their pieces on the board as possible. You don't say, "Let's invent a board game." You say, "What are the goals of chess?" And so they're given by practices or communities, and if we think about what a practice or a community is over time, it's a tradition. And so what he wants to say is we're born into practices and communities and we reproduce them into the future simply by living in them. We inherit, we live, and we reproduce into the future, so it's a little reminiscent, or perhaps even more than a little reminiscent, of Burke's line about, yes, we are part of a social contract, but it's a contract among those who are dead, those who are living, and those who are yet to be born. And so, yes, we do question. You might play chess for a time and say, "Well, should it really be the goal of chess to beat the person with as many of their pieces on the board as possible? Why? Maybe taking their subordinate pieces on the way to the checkmate should be a goal of chess." You could have a debate. Does it take more skill to knock off the knights, and the bishops and so on in order and then get the checkmate, or does it take more skill to checkmate the person with their knights and bishops on the board? And that could be a debate you could have, but it's a contained debate. It's a structured debate. And he wants to say our debates about ends are always like that. Think about debates, getting closer to home for MacIntyre, think about debates within the Catholic Church. The next generation of Catholics doesn't say, "Should we have a church? Should it be organized? Should the bishop of Rome have a special status? Should there be papal infallibility?" No, you're born into a structure that's already there, and then maybe you say, "Well, why is there papal infallibility? It's only a relatively recent creation. It doesn't make sense." 
So your questioning of ends is always from the vantage point of somebody who's born into an inherited tradition. And it doesn't make sense to criticize it from the perspective of outer space. It doesn't make sense to any of the participants, and it won't ultimately give you any answers that'll be in any sense meaningful to you. And so he sees traditions as the mechanisms through which practices are reproduced over time, and he wants to say, yes, there is argument about ends within traditions, but it's always this kind of structured argument that is shaped by the way in which questions come up within traditions. He says somewhere, "Of course, part of what it means to be a Jew is to argue about what it means to be a Jew"-- that within the Jewish tradition that is one of the things that people argue about and disagree about, but it's always going to be this bounded, structured disagreement that brings to bear other ideas within the tradition on some particular traditional claim. For those of you who like philosophical jargon, Michael Walzer wrote a book along comparable lines where he talked about the idea that the only effective criticism is immanent criticism, which is a bit like-- immanent for Walzer is a bit like internal for MacIntyre. If you want to influence Catholics to change their behavior you're not going to influence them unless you appeal to values they embrace, to norms they embrace, to elements of the tradition they accept, and show them how they are undermined by the particular thing that you are criticizing. So internal criticism, immanent criticism, not ex cathedra criticism made from outside a whole system of norms, and values, practices, institutions. You might think you're right, but it'll never have an effect on the people you're trying to influence. So MacIntyre's very much in that spirit. We can argue and reason about ends, but only from inside traditions and practices. 
Okay, we're out of time, and I will finish up with this and then start talking about democracy on Monday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
23_Democracy_and_Majority_Rule_II.txt
Prof: So today we're going to talk about majority rule and democratic competition. Whenever we think about democracy there is this automatic, almost reflexive impulse to associate democracy with majority rule. We saw last time that was one of the things that frightened Madison and his contemporaries: they wanted to limit the power of majority factions, as he called them. Limit the power of majority rule. But we think today, reflexively, that majority rule confers legitimacy of some sort on collective decisions. And our agenda today is to dig into why it is that anybody might think majority rule has some important normative property. Anyone have any suggestion as to why it might? Why should we care? The endless recounts in Florida in 2000 to see who really had the majority, what's the big deal about majority rule? Why should we care? Any takers? Yeah? Student: I guess it's like a somewhat utilitarian flavor to that. Prof: A somewhat what? Student: A utilitarian flavor. Prof: A utilitarian flavor, so that's interesting. There might be a utilitarian justification for majority rule. Just say a little more about what you have in mind. Student: Because if the majority of the society consents to a certain policy or approves of a certain person it's kind of like they're making the judgment that having this person or having this policy would increase their happiness or their satisfaction with society the most. Prof: So government by majority rule might in some sense maximize utility in the society. That's certainly an interesting hypothesis, and I'll revisit it in this lecture. Any other suggestions about majority rule, no? Going once, twice, gone. All right, we'll revisit this utilitarian thinking because I think it's a very good observation to make, because although it's not explicit in much of the discussion of majority rule I think it is implicit, and we'll come back to that. 
The traditional justification for majority rule was that it somehow identified the will of the people. That democracy reflects, embodies, expresses the will of the people, but of course that just puts it one step further back. What is the will of the people? How do you know the will of the people when you trip over it? One of the canonical formulations of this idea is in Jean-Jacques Rousseau's Social Contract where he says, "There is a great difference between the will of all, what all individuals want, and the general will; the general will studies only the common interest while the will of all studies private interest, and is indeed no more than the sum of individual desires. But if we take away from these same wills, the pluses and minuses which cancel each other out, the balance which remains is the general will." We take away from the individual wills the pluses and minuses which cancel each other out, and the balance which remains is the general will. So people have spent hundreds of years trying to figure out whether that makes any sense, whether this notion that you find in Rousseau of pluses and minuses canceling one another out makes any sense. And what sense could it make? How would we know? As I said, how would we know the general will if we fell over it? And it's often been the case that people have tended to associate the general will with the idea of the will of the majority. But going all the way back to the eighteenth century there has been a contrarian impulse, first noticed by a philosopher called Condorcet at the end of the eighteenth century, but that has since preoccupied many theorists of democracy, and that is that, well, actually majority rule doesn't even necessarily reflect the will of the majority. What Condorcet noticed was a very simple fact, which is: suppose you have a society where there are three voters and each voter has a preference over three different policies. So in this example voter number one prefers A to B and B to C. 
Voter number two prefers C to A and A to B, and number three prefers B to C and C to A. What Condorcet noticed was you get a paradoxical result, because you have a majority for A over B. You have a majority for B over C, but then you have a majority for C over A, and that seems like a contradiction, because if you dial back through your notes to when we were discussing basic properties of rationality, remember when we were talking about indifference curves and all of that, this seems to violate the principle of transitivity: you have a majority for A over B, you have a majority for B over C, and then you have a majority for C over A. So it seems like while our individual preferences can be rational, our collective preferences might be irrational in the sense that they contradict themselves. And so several things follow from this, or are thought to follow from this at least. One is that--Condorcet noticed this in the eighteenth century, but a famous, indeed Nobel Prize-winning, economist called Kenneth Arrow proved a theorem in 1951, in a little book called Social Choice and Individual Values, showing that this is a perfectly general result. And so if you have modest pluralism of preferences, which we were assuming-- remember when we talked about crosscutting cleavages on Monday we were assuming diversity of tastes and preferences is important for democracy-- then you can always get this result with majority rule. So that has a number of unsettling implications, because there's no general will identified by the majority. On the contrary, if things are put in one order you'll get one result, but if things are put in a different order you'll get a different result. So the way parliamentary committees often work, you have a motion and then you have amendments to the motion. You always vote on the amendments first and then the final motion last. 
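The three-voter example can be checked mechanically. Here is a minimal sketch (the voter rankings are taken from the lecture; the helper name `majority_prefers` is my own, not anything from the lecture) that tallies each pairwise majority vote and exhibits the cycle:

```python
# The three voters' rankings from the lecture, most to least preferred.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["C", "A", "B"],  # voter 2: C > A > B
    ["B", "C", "A"],  # voter 3: B > C > A
]

def majority_prefers(x, y, rankings):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for r in rankings if r.index(x) < r.index(y))
    return wins > len(rankings) / 2

# Each pairwise contest is decided 2-1, and the results cycle:
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y, voters)}")
# A beats B, B beats C, yet C beats A: the collective preference
# violates transitivity even though each individual ranking is rational.
```

Note that no voter is irrational here; the intransitivity appears only at the collective level, which is exactly Condorcet's point.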
So another important theorem proved by somebody called Gerald Kramer about twenty years after Arrow basically says, "If you let me determine the order of voting, i.e. what the amendments are and what order we'll vote on them, I can get any outcome. All I have to do is know your preferences and control the order of voting, and I can get a majority to appear to support any outcome." So this suggests that majority rule can be manipulated. Now you might say, "Well, when millions of people are voting nobody's really controlling the agenda in that specific a sense, and nobody has all of that information about everybody's preferences, so people can't manipulate the outcome." That might be true as well, but it also suggests that there shouldn't be much moral authority attaching to the outcome if we know-- yeah, nobody manipulated that C wins, but if things had been done in a different order B might have won, or A might have won in this circumstance, so why should we attach any particular moral authority to the idea that C won? We shouldn't. It's just an arbitrary result. So footnote to this, one important takeaway from Arrow's theorem is you should always be the last person to interview for a job, because if there are cyclical preferences among the interviews over the candidates you want the others to bump each other off and then you come along at the end. So don't say you didn't learn anything useful today. You certainly learned that. They call you for an interview say, "Well, could I come in three weeks? Why don't you interview your other candidates first?" So majority rule. If there is such a thing as the general will, majority rule doesn't seem to identify it. And the public choice literature that came out of economics in the 1950s and 1960s basically converged on that proposition. 
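Kramer's agenda-control point can be made concrete with the same three voters: run the pairwise votes in sequence, amendment-style, with the survivor of each vote facing the next alternative. With cyclical preferences, whichever alternative enters the voting last wins, so whoever sets the order sets the outcome. A self-contained sketch (function and variable names are my own):

```python
# The three voters' rankings from the lecture, most to least preferred.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["C", "A", "B"],  # voter 2: C > A > B
    ["B", "C", "A"],  # voter 3: B > C > A
]

def majority_prefers(x, y, rankings):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for r in rankings if r.index(x) < r.index(y))
    return wins > len(rankings) / 2

def agenda_winner(agenda, rankings):
    """Amendment procedure: pit alternatives pairwise in agenda order;
    the winner of each majority vote meets the next alternative."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        if majority_prefers(challenger, winner, rankings):
            winner = challenger
    return winner

# With cyclical preferences, each ordering of the votes produces a
# different "majority" outcome -- here the last entrant always wins:
for agenda in (["B", "C", "A"], ["C", "A", "B"], ["A", "B", "C"]):
    print(agenda, "->", agenda_winner(agenda, voters))
```

This is also the logic behind the interview quip: with cyclical preferences over candidates, the alternative considered last survives, because the only thing it loses to has already been eliminated.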
It said, "There is no such thing as a social welfare function," which is just econ-speak for saying there is no such thing as a general will, or, if there is a general will, we don't know what decision rule would identify it. And a huge amount of ink has been spilt in trying to figure out what decision rule might identify unambiguously something that we would feel morally comfortable calling the general will. And I don't think anybody has succeeded definitively at that task. Locke--we always come back to Locke in this course. Locke has a somewhat different defense of majority rule. Now you might think that's weird, because most people think of Locke as somebody who defended rights. If you go and read about the debates on the Constitution--the Lockeans versus the Republicans, for those of you who have taken the history course--the Lockeans were the people who wanted to create the Bill of Rights, defend against the majority, and so on. But in fact, if you go and read Locke, what you find is that he's a staunch defender of majority rule. He says, "For when any number of Men have, by the consent of every individual, made a Community, they have thereby made that Community one Body, with a Power to Act as one Body, which is only by the will and determination of the majority." The passage doesn't say why. 
"For that which acts any Community, being only the consent of the individuals of it, and it being necessary to that which is one body to move one way; it is necessary the Body should move that way whither the greater force carries it, which is the consent of the majority: or else it is impossible it should act or continue one Body, one Community[.]" So he's saying it's necessary the body should move that way whither the greater force carries it, which is the consent of the majority, or else it's impossible it should continue to act as one body, one community--which the consent of every individual that united into it agreed that it should--and so everyone is bound by that consent to be concluded by the majority. "And therefore we see that in Assemblies impowered to act by positive Laws where no number is set by that positive Law which impowers them, the act of the Majority passes for the act of the whole, and of course determines, as having by the Law of Nature and Reason, the power of the whole." So that's Locke's defense of majority rule. It's not that it identifies some general will. It's really an argument about power, right? He's basically saying, "Look, once you have a community somebody's going to win." It's a little bit like Nozick saying, "Once you have those independents out there somebody's going to force them to join"--just a realpolitik argument that the power of the majority's going to determine what the community does. And indeed, if we delve more deeply into other things that Locke says, he basically says, "Look, if you don't like what the government does, you can oppose, but if nobody agrees with you, you should (as we discussed earlier in the course) expect your reward in the next life. If everybody agrees, or if a majority agrees with you, then you can have 1688. You can change the government." 
So this seems to be an argument about the legitimacy of the majority that is a very hardnosed, realistic judgment about politics, not a moral claim that the majority has any particular intrinsic property that gives it the right to govern. It's just saying, "Well, there it is. The majority is going to flex its muscles and if it's not attended to it's going to win." Now I think we will see that Locke actually gets a lot closer to the truth about the desirability of majority rule than Rousseau did, or than the people who were trying to come up with the notion of the general will, or what modern economists would call a social welfare function. And a way you could think about this argument, particularly in light of the observation about the relationship between majority rule and utilitarianism, is that the best way to think of what Locke is doing here is as a kind of negative utilitarianism, or at least a cousin of negative utilitarianism. We generally think of negative utilitarianism as the doctrine that we should minimize pain, as opposed to positive utilitarianism, which is: maximize pleasure. We didn't make that distinction when we talked about Bentham, but it's there in the contemporary literature. So this is a cousin, I think, of negative utilitarianism in the sense that I think Locke thinks of majority rule, and indeed of resistance to power, as a way of limiting the possibility of domination. Limiting the possibility of domination. And he's saying you can resist power if power's dominating you, but you're only going to win if you're in the majority, if you have the greater force. But why is majority rule the instrument for limiting the possibility of domination? Why should we think of majority rule as having that propensity? Anyone got a suggestion? Why should we think that majority rule, all things considered, would limit the possibility of domination? This is a hard question. Yeah? 
Student: Well, it seems it would go back to the crosscutting cleavages that we talked about the other day, that if you're going to be in the majority you don't know if you're going to be in the minority on other things, so you would limit the domination that you put out. You wouldn't want to be a domineering presence for fear that there'd be other domineering presences in other spheres. Prof: I think that's exactly right. I think you've hit the nail on the head. He said, "Well, it's related to the crosscutting cleavages and one's not knowing whether or not the policies that get enacted are going to be the policies that you want, or whether you're going to regard them as being imposed on you and dominating you." I'll come back to your point in a minute, but as background to it, and to show, I think, exactly why you're right and what turns on what you said, think about another prize-winning economist who has theorized about politics. This is a book published in 1962 by the economist James Buchanan and the political scientist Gordon Tullock, for which Buchanan got the Nobel Prize in 1986, and Tullock was not happy. The argument was, "Well, we don't have any Nobel Prize for political science," so that's why they gave it to Buchanan. Although you may know that last year, in a slap at their own discipline, in fact for the first time the Nobel Prize Committee did give the Nobel Prize to a political scientist, a woman by the name of Elinor Ostrom at Indiana University, but that was then and this is now. In 1986 Buchanan got the recognition and Tullock was not happy, as he widely let it be known. So here's the intuition. It's a veil-of-ignorance argument. This book, interestingly, was written a long time before Rawls--1962--but the basic idea is: behind the veil of ignorance, how would you think about the decision rules that should govern you? How can you reason about that? And they said, "Well, what you have to think about is two things. 
One is, how likely is it that the society is going to do something you don't like, and what can you do about that?" And related to that, you have to think about how much you care, because some decisions are much more important to you than other decisions. Why does it matter how much you care? Because being involved in decision-making takes up time, effort, and energy that you could spend doing other things. So if it's some utterly trivial decision you're not going to want to spend a lot of time on it, but if it's a really important decision then you'll be willing to spend time on it in order to make sure your rights are protected. So they make a distinction between, first of all, what they call external costs, and the idea here is that as the number of people in the society goes up, the chances that you're going to have some decision imposed on you that you don't like also go up, because there are all kinds of decisions that people could make. On the other hand there are decision-making costs too, and as the number of people in the society goes up the decision-making costs increase as well, because there are more people to talk to, to negotiate with, and so on. And so what you have to think about is the sum of those two things. How important it is to you to participate in decision-making is going to depend on how much you care about the result and how much time you are going to have to spend getting it. And what they said, in a kind of utilitarian calculus, is, "What you're going to want to do is add them up": you're going to want to minimize the sum of the external costs and the decision-making costs. You're going to want to minimize that. So when a decision is completely unimportant to you, you won't want to spend a lot of time, but when a decision is really important to you, you will be willing to spend time. And so then they said, "Well, so how should we think about the organization of society?" 
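The Buchanan and Tullock calculus can be put into a toy model. The cost curves below are illustrative assumptions of mine, not anything from the lecture; the point is only that the threshold minimizing the sum of external and decision-making costs rises with how much you care about the decision:

```python
# A toy version of the Buchanan-Tullock calculus (functional forms are
# illustrative assumptions). A decision requires k of N votes: external
# costs fall in k (harder to impose outcomes on you), decision-making
# costs rise in k (harder to reach agreement). Pick k minimizing the sum.
N = 100

def external_cost(k, importance):
    # The more you care, the costlier a decision imposed against you.
    return importance * (N - k) / N

def decision_cost(k):
    # Bargaining costs grow steeply as you approach unanimity.
    return (k / N) ** 3

def best_rule(importance):
    return min(range(1, N + 1),
               key=lambda k: external_cost(k, importance) + decision_cost(k))

for importance in [0.1, 1.0, 10.0]:
    print(f"importance {importance}: optimal threshold {best_rule(importance)} of {N}")
# Trivial decisions get low thresholds ("let the bureaucrats decide");
# the most important, constitutional-type questions push toward unanimity.
```

With these particular curves the optimal threshold climbs from well below a majority, through a supermajority, up to full unanimity as the stakes rise, which is the shape of their ladder of rules.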
For questions that people think are really important we should have something like unanimity rule because, after all, unanimity rule is a veto of one. If you have unanimity rule it's like the Pareto principle. Anybody can veto. Everybody's agreement has to be gotten in the limiting case. If you had absolute unanimity rule you can't do anything without that. Whereas for less important decisions--this point here, when you're minimizing the sum of external costs and decision-making costs--what would be something less than unanimity rule? It might be a two-thirds rule; for even less important things you might say majority rule; and for even less important things than that you might say, "Let the bureaucrats decide. It's just not worth my time." So for Buchanan and Tullock, there's no presumption that there's any particular importance attaching to majority rule. On the contrary, we should say for the most important things we should start with unanimity rule and then we can come down the ladder, or we can think about steadily declining supermajorities as things become less important to us. And so the argument was that for constitutional questions it should be something very close to unanimity rule, and we should have entrenched or semi-entrenched clauses that are virtually impossible to change. They're telling a story that more or less reflects the structure of the American Constitution, where amending the Constitution does require supermajorities that are very hard to get, but regular legislation takes a lot less. And it's simply this calculation, this self-interested calculation, that leads you to often be willing to go with majority rule, and there's nothing more to be said about it than that. So now we come to your observation, and your observation is basically the observation that Buchanan and Tullock are wrong; that Buchanan and Tullock are wrong because they confuse unanimity as a state of affairs in the world, where we all agree about something, with unanimity as a decision rule. 
And your observation was first made by Brian Barry, who sadly died last year, in a very good book called Political Argument, and was developed by Douglas Rae, who teaches here in SOM, in two important articles in the American Political Science Review. And what Barry and Rae pointed out was exactly what this gentleman here pointed out a few minutes ago, which is that the whole Buchanan and Tullock story assumes we have agreement at the baseline. The whole Buchanan and Tullock story assumes that everybody's happy with the initial state of affairs. And so then we say, "Well, we started at that baseline, and for the things that are most important to you from that baseline we'll create unanimity rule and give you--everyone in the room--essentially a veto, but then we'll work down from that." But what Rae and Barry said was, "Well, what if we say that behind this veil of ignorance we don't know whether we're going to like the status quo or not. Maybe we will, maybe we won't, but if you don't want to give any special status to the status quo, then you shouldn't bias decisions toward the status quo, because maybe it'll turn out that you don't want the status quo and then you're stuck with something that's impossible to change." And so what Barry and Rae showed was that, actually, if you assume behind a veil of ignorance that you're as likely to be against the status quo as in favor of the status quo, then you would choose majority rule or something very close to it--if the number of people in the society, N, is even you would choose N over two plus one, and if it's odd, N plus one over two--if you were wanting to minimize the probability that a decision's going to be imposed upon you, not knowing whether or not it was the decision favored by the status quo. 
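The Rae and Barry result can be computed exactly rather than argued. In this sketch (my setup for illustration, not their text verbatim), each of N voters is independently as likely to favor a proposal as to oppose it, and we ask which threshold k, out of N yes-votes required, minimizes your chance of having the outcome go against you:

```python
from math import comb

# Behind the veil of ignorance you are as likely to oppose a proposal as
# to favor it, and so is everyone else. A rule requires k yes-votes out
# of N. You are "frustrated" if it passes over your opposition or fails
# despite your support. Which k minimizes that probability?
N = 9

def p_frustrated(k):
    # Y = yes-votes among the other N-1 voters, each yes with prob 1/2.
    p = lambda y: comb(N - 1, y) / 2 ** (N - 1)
    passes_over_me = sum(p(y) for y in range(k, N))        # I oppose, it passes
    fails_despite_me = sum(p(y) for y in range(0, k - 1))  # I favor, it fails
    return 0.5 * passes_over_me + 0.5 * fails_despite_me

best_k = min(range(1, N + 1), key=p_frustrated)
print("threshold minimizing your chance of being overridden:", best_k)
# For N = 9 the answer is 5, i.e. simple majority.
```

Unanimity rule drives the frustration probability up toward one half, because it leaves you stuck whenever you turn out to oppose the status quo; simple majority is the minimizer, which is the sense in which any higher threshold simply biases outcomes toward the status quo.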
So there's a kind of veil-of-ignorance logic in the Rae and Barry critique of Buchanan and Tullock which says that the presumption should be in favor of majority rule or something very close to it, unless we want to bias the whole system toward the status quo, and we don't actually have any ultimately good reason for doing that, because even if somebody at some constitutional convention preferred to entrench whatever it is--the right to bear arms--two hundred years later we might not take the view that we want to entrench that, but now it becomes impossibly hard to change. So if you say behind the veil of ignorance we're going to be as likely to oppose as to support, there does seem to be a kind of negative utilitarian logic which says: if I want to minimize the likelihood of having decisions I don't like imposed upon me, I would prefer majority rule to the going alternatives. Well, that's all very well, and I think that the Barry and Rae argument is pretty robust. It's certainly stood up for, what now, close to half a century. It's regarded as conventional wisdom on this point. Still and all, it doesn't tell us a lot about the dynamics of actual politics. How does competitive democracy play out when we're thinking about how actual political systems, actual democratic systems, operate? And you'll recall that in Monday's lecture I said to you that Robert Dahl was the most important democratic theorist of the second half of the twentieth century, but I didn't talk about the first half of the twentieth century. And I think the most important democratic theorist of the first half of the twentieth century, on whom in many ways Dahl built, was actually an economist by the name of Joseph Schumpeter. Economists had a lot of influence in democratic theory in the twentieth century. Schumpeter wrote a book called Capitalism, Socialism and Democracy, which he published in 1942, eight years before his death. And most of that book is actually completely unremarkable. 
Most of that book is a long and rather tortured critique of Karl Marx, but the piece of it we're focusing on is those two little chapters called The Classical Theory of Democracy and Another Theory of Democracy. And I should say that those two little chapters may be the most influential writings about democracy in the real world that have come out of the twentieth century. Now it should also be said, just as a prefatory matter, that the title of the first of those two chapters is misleading, because what he calls the classical theory of democracy is actually a neoclassical theory. It is Rousseau's idea of the general will, which we now know to be chimerical, but you know from Monday's lecture that Rousseau's idea of the general will was actually a neoclassical adaptation, because the ancient Greek idea was ruling and being ruled in turn. So Schumpeter's critique of what he calls the classical theory of democracy we should remember as actually a critique of a neoclassical eighteenth-century idea. But he starts from the proposition that the critique of the idea of the general will is valid. There is no such thing. There is no social welfare function, as an economist would put it. But he, Schumpeter, says, "Let's think about democracy in a fundamentally different way. Let's define it as follows. 'The democratic method is that institutional arrangement for arriving at political decisions in which individuals acquire the power to decide by means of a competitive struggle for the people's vote.'" And now he's going to develop this idea in that second little chapter with an analogy to the market. He's going to think about democracy as shopping. Schumpeter says, "Look, think about democracy, think about the polity, as an analog of the economy." What do we have in the economy, and what do we have in the polity? Well, one thing we have in the economy is consumers, and what is the political analog of consumers? Any guesses? What's the political analog of consumers? Student: Voters. 
Professor Ian Shapiro: Yeah, voters. Another thing we have in the economy is firms. What's the political analog of firms? Anybody? You have the mic, guess. Student: Various candidates for positions? Professor Ian Shapiro: Yeah, parties. That's good enough. Firms make profits. What is the political analog of profits? Student: Whoever wins the election? Professor Ian Shapiro: Close. What do they get? How do they win? Student: They get votes. Professor Ian Shapiro: Exactly. So firms want profits, parties want votes. Then firms produce products. What is produced in the polity? Student: Various party platforms. Professor Ian Shapiro: Platforms, yeah, and legislation. And so when we think of the doctrine of consumer sovereignty in economics, what would the political analog of it be? This won't leap out at you immediately, but okay: it's the idea of democratic legitimacy; it's the equivalent. When we say there's consumer sovereignty in markets, there's democratic legitimacy in the polity. And so we have this basic parallel between the economic system, where firms are competing for profits and engage in a competitive struggle for the consumer's dollar, to paraphrase Schumpeter, and the polity, where parties are competing for votes and engage in a competitive struggle for the people's vote. And democracy is not about participation, or deliberation, or all of the things that people try to identify it with; it's essentially, as I said, about shopping. You shop for politicians and policies in just the same way as you shop for iPads and Maseratis or whatever it is you buy. This is how we should think about democracy. Hugely influential, hugely influential. And what disciplines the elites, what creates the democratic legitimacy--or maybe I should have put democratic accountability there--is the fact that the voter can kick the bums out. Gordon Brown's worrying about this for the next few weeks. The British voters can heave him out. 
That is what disciplines the political elites. That is what prevents them ultimately from exercising domination. And so we think about a competitive system driven by politicians who are competing for votes. It turns out there's a big tradition in American political science of studying this that goes back to, again, another economist--kind of depressing, all these economists, but there it is. In 1929 Hotelling wrote a paper trying to explain why it is that if you look at a town, any town, you'll find Target and you'll find--what's another? You'll find Shaw's and Stop & Shop right next to each other on Main Street. And Downs developed this into a political argument. Basically he said, imagine you've got a continuum here from left to right, so the people with ideological--this is ideology. People on the left are here. People on the right are here. Well, we should think the population is more or less normally distributed, so most people are in the middle and some people are at the two extremes. Well, if you have two political parties, where are they going to head for? They're going to head for the median voter because that's where most of the votes are. So you might have differences of opinion within the parties, I'll come back to that in a minute, but basically--oops, where are we? We're getting too far ahead of ourselves here. Parties are going to head for the median voter because that's where the votes are. When they asked whoever it was, "Why did you rob banks?" the answer was, "Because that's where the money is." Politicians are going to go for the median voter. That's where the votes are. Now, of course, they could be wrong. They're going to guess. So for example, in 1964 Goldwater, running for president, thinks that the median voter is way over here, but he's wrong so he loses. Then in 1980 Ronald Reagan basically runs on exactly the same platform that Goldwater had lost on in 1964 and everybody says, "This crazy right-wing nut's going to be creamed. 
We saw that in 1964," but either because he knew something, or because he was lucky, or some combination, it turned out that between 1964 and 1980 the median voter had moved, and Jimmy Carter was wrong about where the median voter was. So generally speaking, other things equal, the parties will converge to the median voter, or at least to where they believe the median voter is, and the people who get it right will win. Now that seems to have the implication that, particularly as polling gets better and better, so you don't make the kind of mistakes the Democrats made in 1980 or the Republicans made in 1964, the parties are going to start offering exactly the same policies. And indeed, if you look at the--I don't know how much attention you folks are paying to the British election, but basically they're offering the same policies. They're both saying the others are going to lie to you about what they're going to do, but we're going to keep the National Health Service. We're not going to cut this. We're not going to cut that. We're not going to do this to taxes. Because they've done all the polling, they know what the median voter wants, and so they're basically offering the same policies. I'll come back to what that means for political competition in a minute. But if they're competing, if they're basically offering the same policies, what are they competing over? What are they putting in front of the electorate? If they're basically both doing the same thing, they're going to compete over things like character assassination. They're going to say, "He's a liar. Vote for me because he's a liar. He says he won't cut the health service but he will." The other one will say, "He says we won't raise taxes, but he will. He's dishonorable. He was involved in this, that, and the other scandal." That's what they're going to do. They're going to compete over personalities, right? 
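The Hotelling-Downs claim about convergence can be checked directly rather than taken on faith. In this sketch (made-up voters; the roughly normal shape follows the blackboard picture described in the lecture), each voter picks the nearer platform, and the platform sitting at the median voter's position beats any rival in a pairwise contest, which is why vote-seeking parties converge there:

```python
import random

# Voters on a left-right line; each votes for the nearer platform.
# The platform at the median voter's position beats every rival.
random.seed(0)
voters = sorted(random.gauss(0, 1) for _ in range(1001))
median = voters[len(voters) // 2]

def votes_for(a, b):
    """Votes for platform a against platform b; equidistant voters abstain."""
    return sum(abs(v - a) < abs(v - b) for v in voters)

for rival in (-2.0, -0.5, 0.5, 2.0):  # rival platforms off the median
    print(f"median vs {rival:+.1f}: "
          f"{votes_for(median, rival)} to {votes_for(rival, median)}")
# The median platform takes a majority against every rival position:
# the median voter and everyone on her far side are always closer to it.
```

A platform way out on one tail, like the Goldwater example, loses to anything nearer the middle for the same reason, and a party guessing the wrong median loses to the party that guesses right.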
Now, one way you could imagine something that would change that is if there were other variables that kept these parties apart. So, for example, we have primaries now in the U.S., and of course the primary voters in the Democratic Party are over here, probably normally distributed, and the primary voters in the Republican Party are here. If you have to win the primary first then you're going to get pulled down here, and the Republicans are going to get pulled over there, and so what happens is, if you have some other force that pulls the parties apart, then you might get competition over policy. Some people say, "Oh, it's bad to have primaries, because if you have primaries the activists and extremists get control of the parties--in both parties, you know: the unions control what the Democratic Party does and the far right controls what the Republican Party does." It's true, but the flipside of that is that when the general election comes, you actually have competition over policies because the parties have been pulled apart by the fact that they had to win these primaries first. Now there are other bad sides of primaries, but part of the point here is you need to see that what it is that parties compete over can change. So when we get to the general election the parties have been pulled somewhat apart. So if you have something in the structure of political parties--it could be primaries, it could be strong control of the party selection process by the leadership, or something like that--then you will get competition over policy. Now some people would say that's better because people get a clear choice, as opposed to just character assassination and that sort of thing. It's actually better for them to compete over policies. 
Notice, though, if you do have strong parties that are kept ideologically apart, and you have competition over policy, you can get results like in Britain, where the railways were nationalized and denationalized three times in the twentieth century, because Labour comes in and nationalizes them and then the Tories come in and denationalize them. So you get policy alternation, and that could be good or bad, but it's a different thing to compete over. So you can have competition over personalities, you can have competition over policies, and of course you can have, as we do in our system, competition over pork. If you have a system in which you have individual constituencies like we do in the U.S.--you have congressional constituencies and then you have states--each representative is really looking not for the national median voter, but for the median voter in their district or in their state. That's who they're serving. And so there's a lot of literature about whether it's better to have proportional representation, where you have basically one national constituency, or whether you should have a system in which politicians compete with one another as to who's going to bring more pork back to our district. So I mention these things only so that you're aware that once we talk about political competition we haven't settled what it is that parties compete over. But Schumpeter didn't address these because Schumpeter ultimately didn't care. He didn't care, really, whether they were competing over personalities, policies, or pork. The point was that they were competing, and they wanted to throw one another out. This is the system, to hearken back to Monday's lecture, in which ambition really counteracts ambition, whereas separation of powers and all that, for the reasons Dahl gave, doesn't really work. There's no mechanism there, but this creates a mechanism for ambition to counteract ambition. 
So Schumpeter and Dahl are much more sympathetic to the idea of pluralism and crosscutting cleavages, and a competitive struggle for the people's vote, than they are to institutional checks and balances to discipline elites. Of course there are problems with Schumpeter. I'll just mention them briefly because we're running short of time, and you can pursue them in section. But one is: you're going to have two parties, or maybe three parties, or four parties; it's oligopolistic competition. It's not a very thoroughgoing form of competition, so that's one thing to think about. Secondly, this whole thing doesn't address the role of money in politics. We'll see this in sharp relief in the state of Connecticut in the next eight months, where the likely Republican nominee for Chris Dodd's Senate seat has already announced she's going to spend fifty million dollars of her own money on her campaign. And so Attorney General Richard Blumenthal, who's the likely Democrat, has to raise a lot of money. So it's not necessarily a competitive struggle for the people's vote, but rather a struggle for money, and that has implications which you should think about. Notice, third, that this Schumpeterian story completely devalues participation. After all, it buys into the Buchanan and Tullock definition of the problem, where we saw participation was defined as a cost. If you have to spend time participating in politics it's time you could be spending driving your Maserati, and you'd rather be doing that. That's the assumption. How good an assumption is that? Maybe participation is inherently valuable; then you have to think about that. And then finally, it really is a minimal conception of democracy. Now some people have operationalized Schumpeter to say--and this is true in the comparative politics literature about new democracies--that we can't call a system a democracy until the government has twice lost an election and given up power. This is sometimes called the two-turnover test. 
A famous Harvard political scientist who recently died, Samuel Huntington, came up with the two-turnover test. We can't call something a democracy unless there's been this turnover at least twice. In one way that's a stiff test. Japan didn't meet it until very recently. The U.S. didn't meet it until 1840. India didn't meet it until recently. South Africa, which people crow about as a new democracy, has yet to meet it. We don't know what would happen if the ANC lost an election. Would they give up power? Maybe, maybe not. So in one respect it's a robust test, but people criticize it as being minimal, saying really there's more to democracy than that, and we'll come back to that question next week. But what I want you to take away is, I think, the enduring insight of the Schumpeterian model, one that really starts with Locke's linking of majority rule to this idea of resisting domination. Locke was very prescient; he saw three centuries ahead. Non-domination, this idea of resisting domination, however you institutionalize it and operationalize it and all of that, is the basic animating ideal of democracy. We'll pick up from there on Monday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro / 2_Introductory_Lecture.txt
Prof: Okay, so let's begin. I asked you to bear two questions in mind in reading the Eichmann, and I'm going to get to those in a minute. But before we get to that, who was Adolf Eichmann? Who was Adolf Eichmann? Student: He was the expert on Jewish immigration and relocation. Prof: But what was his job? What was he there to do? Student: Well, in the beginning he was sent to talk to the Jewish elder council and try to negotiate the moving of thousands of Jewish people as quickly as possible from different areas throughout Europe. Prof: Correct. What else was he supposed to do? Why don't you give somebody else a chance? That was certainly part of what he was doing. What else was he supposed to do? Was he somebody who was involved in designing the final solution, as it was called in the Third Reich? Was he a policy maker? People are shaking their heads. No, he wasn't. He was basically an implementer, right? He was obeying orders and his job was to make the trains run on time, literally speaking; the trains that were going to take Jews to concentration camps. It was a complicated logistical feat and he was in charge of it. His job was to make sure that it was an efficiently run operation. As I said, at the end of the war he was actually captured by the allied forces, but they didn't realize what a significant figure he was. He was subsequently released inadvertently, and made his way to Latin America, where he lived undercover until the Mossad found out where he was, and in 1960 they kidnapped him and brought him to Israel, where he was tried in the trial that you read about in this book for today. He was found guilty of crimes against the Jewish people and crimes against humanity and put to death. Okay, what kind of a person was he? What do you think motivated him? When you read this, what kind of a person, what stood out about him? Anybody? Student: It seems throughout the text that he was driven by this idea of self-advancement, the need for... Prof: He wanted to get ahead. 
Student: Yeah. Professor Ian Shapiro: He wanted to impress his superiors. Student: Yeah, first and foremost, I believe. Professor Ian Shapiro: He wanted to get an A, right, pretty much? That's one thing that strikes you about him, yeah. What else struck you about him? Student: The thoughtlessness that he demonstrated throughout the campaign. Professor Ian Shapiro: The what? Sorry. Student: The thoughtlessness. Professor Ian Shapiro: Thoughtlessness. Student: Just kind of the blind obedience to the Fuhrer's word. Professor Ian Shapiro: He was not a reflective man. Student: Not at all. Professor Ian Shapiro: He's not somebody who would question authority, right? Did you get the impression he hated Jews? Student: No. Professor Ian Shapiro: No. What makes you say that? Student: I mean, well first during his testimony he claimed that he didn't hate Jews. He actually was a supporter of the Zionist movement through some of the readings that he did before 1939. And throughout even the years between 1939 and 1945 when he was carrying out all these actions that were detrimental to the Jewish people, he found himself having Jewish friends, working with the Jewish people trying to find solutions, but as long as that didn't contradict the orders of his superiors. Professor Ian Shapiro: I think you've got him exactly right. He was not somebody motivated by a visceral anti-Semitism. Indeed, you get the impression reading this book that if his superiors had said to him, "Instead of shipping people to concentration camps we want you to ship munitions around the Third Reich to resupply the army," or "We want you to ship car parts," he would have done it with the same zeal and desire to improve his standing with his superiors. He wouldn't have been any different about it. He didn't seem to be like somebody who saw this as a great opportunity to do something he really believed in. 
He was simply trying to get an A, trying to get ahead, trying to impress his superiors, do a good and efficient job at whatever it was that he was asked to do. If that's an accurate description of him it's quite chilling, right? And I think this brings us to the first question. What is it that makes you uncomfortable about this man? One of the things I asked you to think about, what makes you uncomfortable about Eichmann? Yeah? Student: The fact that he was just basically not thinking about what he was doing, so someone else had total control of what he was going to do. Professor Ian Shapiro: So his un-reflectiveness is what makes you uncomfortable. Student: Right, because that means really anyone can be like that. It kind of makes you reflect on yourself and just on human nature. Professor Ian Shapiro: Okay, so you feel he should have been questioning the authorities that were giving him the orders to do this? Student: Yeah, that'd be nice. Professor Ian Shapiro: Okay, that's definitely one thing that makes people uncomfortable. What else? Anybody else? Yeah? Student: It was just that his un-reflectiveness transferred to the fact that-- well, he wanted to do his job well, but the fact was that he wasn't just transporting munitions. These were people's lives, supposedly people he easily could have been friends with. Prof: Right, he wasn't thinking about the larger purpose. So it's connected to the un-reflectiveness, although we'll see that one of the themes in this course that political theorists write about is the whole idea of detaching means from ends, of thinking about how to do things efficiently without reference to what it is that we are doing, but it makes us uncomfortable. What else makes you uncomfortable about this man? What makes your skin crawl when you think about him? Anything else? Yeah? Get the microphone. Student: He was gloating.
Like when he was in Argentina and he was doing his interviews he was gloating about the millions of people that had died because of him, because of his work. Professor Ian Shapiro: He was pleased that he had done a good job. Student: Yeah. Professor Ian Shapiro: It wasn't as if he was doing this with any sense of reluctance, right? Even though you don't get the sense that this man had a visceral anti-Semitism about him he certainly was pleased with the fact that his superiors liked what he was doing, and that he was being commended for it, and he was getting ahead. Anything else that makes you uncomfortable about him? Yeah? Student: He brings up the term idealist and he sort of attaches a sense of honor and loyalty to the Third Reich that we find uncomfortable because of what it's trying to accomplish. Professor Ian Shapiro: Loyalty to his superiors. Student: Right. Professor Ian Shapiro: That he thought it was important to obey his superiors. Okay, somebody over there was going to say something. Yeah, over there. Student: Well, it's not something specific about his actions, but I'm uncomfortable with the fact that when he is considered normal by all the tests, and too, when he wasn't thinking at all he actually thought he was thinking and he fooled a lot of people into thinking that he's smart and competent. So it makes me wonder if that stupidity is just something within all of us. Professor Ian Shapiro: Yeah, I think that's right as well. So here you have a man who is completely unreflective about what he's doing. He's obeying orders from above. He's not breaking the law, right, at all. He's implementing the law as it's enacted in the Third Reich and he's going about something that is absolutely monstrous with calm efficiency and good humor. You could imagine this guy going home at the end of the week, playing with children, being friends with the neighbors, going to a barbecue, and going back to work the next day, right?
One of Arendt's phrases to capture this is that--one of the things that seems so chilling about this guy is that he doesn't seem like a monster. He seems like the man next door. And the phrase she uses is, "The banality of evil." He doesn't seem like somebody who's got some unusual traits. So I think to turn the screw a little bit further, one of the things that makes people uncomfortable with a man like this is it sort of makes you wonder how you would have behaved in his situation. You hope you would not have behaved as he did, but he doesn't seem that unusual. He seems a completely unremarkable man, a fairly typical guy, you know, putting one foot in front of the other in the situation in which he finds himself and getting on with his life. And that, I think, is captured by her phrase, "The banality of evil." Okay, let's now just put Eichmann to one side for a minute and think about the second question I asked you to reflect upon while reading this. What two things make you most uncomfortable about the events surrounding Eichmann's apprehension, trial and execution? First of all, why did Israel do what it did? Why do you think they went and kidnapped him? I mean, the normal process if you're trying to get hold of a criminal in other countries, you go and you apply for extradition, and you make the case, and you go to court, and the person is arrested, and eventually they're brought to trial. So why didn't they do that? Why did they do what they did? Anybody even know or want to take a guess? Why did they do that? Does anyone want to try that? Yeah? Student: Well, it seems to imply that they might have been concerned or might have suspected that other countries wouldn't be sympathetic to their wish to try Eichmann. They might have worried that he'd garnered a certain amount of international support or defense in the country he was living in. Professor Ian Shapiro: I think that's right. If they had tried to extradite him it wouldn't have worked.
There were many other former Nazis in Argentina and elsewhere in Latin America, and the minute he realized that they knew where he was and who he was he would have gone underground and vanished. And there would have been so much sand in the wheels of the extradition system that they made a judgment, which is probably a valid judgment, that if they were going to get him at all, this was the way they were going to get him. So that's why they did it. Pretty much that's why they did it. Okay, so given that that's why they did it, what if anything makes you uncomfortable about what they did? Over here we need a mic, yeah. Student: In one of the last two chapters, I believe, they're discussing the reasons for judgment against him on all the counts, and one of them she's discussing the four main points that they attempt to prove through the prosecution. And in some of them they discuss the term in dubio contra reum, which is, I guess, when in doubt act against the defendant. And that was kind of just when we don't have necessarily all the evidence because a lot of it is speculative, we will act in a way that will... Professor Ian Shapiro: Okay, so there was not a lot of attention to the rules of evidence. Student: Yeah. Professor Ian Shapiro: Right? Okay, so that's one thing. There were a lot of what we would think of as procedural irregularities in this trial. Yeah? Student: I was uncomfortable with the speed of the execution. It happened extremely soon after his appeal and plea for mercy; within a couple of hours. Professor Ian Shapiro: Right, it was the way executions are done in China today, not the way executions are done in the United States today, right? It was very rapid. And why were you uncomfortable with that? Student: I think there's no doubt that what he did was wrong, but you at least have to go through the procedures just to ensure that justice is actually served, so that even if it was correct that he deserved the death penalty.
I think the actual protocol should have been followed and that he should have gotten his time to make the appeal. Professor Ian Shapiro: Okay, anything else make anybody uncomfortable about what was done? Over here, yeah? Student: It mentioned in the beginning how showy the trial was, and one part that I thought really stuck with me was when the author mentioned that Israel's laws itself prohibited Jewish... Professor Ian Shapiro: Israel what, sorry? Student: Israeli law prohibited Jewish and non-Jewish marriage, so how the two societies, although one is certainly worse than the other, I think, on a moral level there's still no perfect society, and that really was kind of disturbing, I think. Professor Ian Shapiro: So it was something of a show trial. Student: Yeah. Professor Ian Shapiro: There was more than one agenda here. One agenda was justice, bringing him to justice, but clearly there was an agenda of revenge, and there was an agenda of catharsis for many of the people who testified, who came there. It was very important for them personally to get up and speak about what had happened to them for which he was held responsible. So there were multiple agendas going on here, only one of which was what we normally think of as what's going on in a criminal prosecution. Anything else make anybody uncomfortable? Yeah, over here. Student: The author mentioned that Israel wouldn't have gone to the trouble of kidnapping him if he wasn't automatically going to be found guilty. It was just a foregone conclusion from the beginning and that's just really setting such a bad precedent for trials in general. So even though it wasn't clear he was guilty the fact that it was just-- they wouldn't have even troubled going through it if they thought there was any possibility of him being found innocent or not guilty. Professor Ian Shapiro: Okay, so the outcome was never in doubt. Student: Uh-huh. Professor Ian Shapiro: Okay, and that makes you nervous because? Hang on. 
Student: Because, well, his case is pretty black and white, but that's just setting a precedent for the future. And if in a case where he's so clearly guilty you can't use real evidence, and you can't actually go through the steps, the meaningful steps of a trial to find him guilty, then it seems like in the future other trials where it's not so black and white may be made more into show trials as well. Like he could clearly have been found guilty in a meaningful way with real evidence and they just kind of didn't try to do that. Prof: Okay, anything else make anyone uncomfortable? Yeah, over here. Student: Also the lack of evidence that was eligible for procurement on the side of the defense and the lack of defense support really stuck out to me. Professor Ian Shapiro: They didn't have what we call discovery. There were a lot of procedural irregularities here. They didn't have access to the evidence. There was a lot of what, you know, when you all go off to the law school and you learn criminal procedure a few years from now, would be called reversible error in this trial; that is, on appeal it would have run into trouble, procedural problems. So there were procedural problems in the trial, what else? Over there, yeah? Student: It makes me uncomfortable that Israel was even claiming jurisdiction in the case considering the crime did not happen on Israeli soil and that they literally had to send their secret service to kidnap him to bring him to Israeli soil. Prof: Okay, so you are troubled that they thought they had jurisdiction in this matter. Student: Right. Professor Ian Shapiro: Because the crime didn't happen on Israeli soil. Any other reason why one might be troubled that they claimed jurisdiction? Student: It felt like it was the country kind of speaking for a people or speaking for humanity in a way that was just unsettling to me.
Professor Ian Shapiro: Okay, that they arrogated to themselves the right to prosecute in the name of the Jewish people and in the name of humanity, crimes against humanity they convicted him of, you might say. Who appointed them to do that? I think there's a lot of truth to what you say there, but there are a number of other aspects to it as well. Why else might we be troubled that they thought they had jurisdiction to do this? What does jurisdiction mean anyway? What is jurisdiction? Yeah? Student: I think it's equally troubling that they thought that they had to do this. I mean, they talked a lot about the way that former Nazis are being tried in Germany and how lenient their punishments were. I think that that is kind of the need of the German people to kind of, I don't know, twist history in that way to take blame away from these people who obviously did a lot of things wrong. In a way it forced Israel to have to take this very drastic sort of extra-legal action. Prof: Okay, but so I think you all make a number of valid points, but somebody would say, a devil's advocate would say, "Well, look, nobody's seriously alleging that he was incorrectly convicted. Nobody's seriously alleging that he didn't do this." We've already said by assumption that they were working on the premise that if they didn't prosecute him it wasn't going to happen at all. So what's there to be that troubled about? Is there a counter argument to that? Yeah, over there. Student: Next time it might not be so clear. Another country might use this as a precedent for themselves. Prof: Okay, so there's a precedent issue, and what are the precedents we're worried about? Student: The precedent of claiming jurisdiction over any number of crimes that occur elsewhere, the precedent of basically kidnapping in order to have a legal system, the precedent of show trials for serious crimes. Prof: And it's potentially a huge issue.
I don't know if you noticed Israeli politician Livni canceled a trip to Britain a few weeks ago because they discovered that some prosecutor in Britain was going to have her arrested for war crimes in Gaza. She had been foreign minister during the Gaza invasion a year ago. And the British government was very embarrassed and they were trying to fix it so that that couldn't be done again. But Pinochet couldn't travel to many places because he was eventually going to be arrested. There are some countries in which there are conversations about whether they might try and arrest Donald Rumsfeld if he travels to them for what went on at Abu Ghraib. So if countries start setting themselves up and saying we have the right to prosecute war criminals, how is it going to be regulated? Who are they going to be answerable to? It might look okay in this case because nobody seriously contends that Eichmann didn't do these things or that they were justifiable, but there's this precedent setting issue that you have to worry about. Anything else? Anything I haven't mentioned that you might find troubling? I mean, might somebody not say, "Look, it's not just that it wasn't on Israeli soil, this guy's being prosecuted by a legal system which didn't even exist when he committed his crimes, and furthermore he was obeying a legal system that did exist," right? He was in Germany in the 1940s obeying the laws of the Third Reich. He wasn't breaking any laws then, and now he's being tried in a legal system in a country that didn't even exist then for violating other laws. Isn't this just victor's justice? So what do we say about that? Did they do the wrong thing? A lot of people, I think, would grant much of what's been said here and still be troubled by the notion that they shouldn't have done it because he would never have been brought to justice. Okay, now let's take a step back from this conversation and bring in what we were doing in the first half of our discussion. 
Don't you think there's a tension now between the two halves of our discussion? Because when we were talking about what made us uncomfortable about Eichmann it was his abdicating his moral responsibility to question the prevailing law, to question the legitimacy of what he was being asked to do, his un-reflectiveness, his lack of interest in the overall goals of the enterprise that he was part of. This completely unreflective person doing what he was told to do, obeying orders, trying to please his superiors and get ahead, that's what made us uncomfortable. But now when we talk about what Israel did in 1960 it seems like what makes you uncomfortable there is the fact that they did do all of those things. They sat down and they said, "Well, yeah, there's international law, and there's extradition, but hey, you know what? It's not going to happen. And if we want to get the morally correct outcome here we have to take it upon ourselves not to accept the existing rules of the game, not to accept the existing order, but to stand up for what's right." So how is it that we on the one hand are uncomfortable with Eichmann for failing to do that, but here we're uncomfortable when these Israeli commandos and prosecutors do exactly that? Is that a real contradiction or am I missing something? Anybody want to pick up that? What do you think? Student: So I do think it's a contradiction and very worth examining, but one sort of interesting aspect of it is just that I think part of what we object to about the trial is that it claimed to be following a certain legal order, and that it sort of associated this working outside the system with a system. Professor Ian Shapiro: So what is that legal order? Student: With the Israeli legal system, I mean, with the judicial court system of the nation of Israel. Professor Ian Shapiro: I take your point, but it's not very plausible. 
It's not the best defense because after all, as has come up in this conversation, even by the standards of existing Israeli law it was not a well-conducted trial. Student: Right, but what I'm saying is that this trial, which clearly broke a lot of what we would consider the good standards for a trial, sort of tainted somehow the idea of the Israeli judicial system. I'm not saying this would have been better, but some of our objections wouldn't necessarily apply if it had been really obviously just a political assassination, because they didn't know what else to do with him. I mean, not that that would be acceptable either necessarily, but just the relation of this to the courts gets in there somehow too. Professor Ian Shapiro: Okay, so that might speak to part of it, but anything else about this tension between on the one hand we're worried that Eichmann abdicated his moral autonomy and responsibility. On the other hand we're uncomfortable that they asserted theirs. Yeah? Student: Well, I'm not really quite sure the tension completely exists if we're to assume that one system of government or one system of laws is, like, legitimate and valid like the Israeli trial system. Like having a system that lets you have appeals, that doesn't kidnap people illegally versus a system that kills millions of people who are innocent of doing anything wrong, and kills them simply for who they are. So in Eichmann's case it seems to me that it is legitimate to challenge him, to challenge the system, and to be shocked or disappointed when he doesn't and just goes along with it. On the other hand, to bring someone like that to justice it is a bit troubling that when you have a system that does work, or that we widely believe to be a judicial system that does work and is sort of widely accepted to be a good system, setting a precedent there could be potentially a bad thing.
I'm not quite sure there's a tension if in one case, challenging the system is good, and in the other maybe challenging the system has more of a gray area of issues. Professor Ian Shapiro: Well, I think you've hit the nail on the head. I think that the tension goes away if we say, "Well, the reason we are troubled by Eichmann's failure to ask any questions about what he was being asked to do within the legal order of the Third Reich is that it was an illegitimate legal order." It was, as Arendt says somewhere in there, a criminal regime, and it was his failure to recognize that that makes us uncomfortable. On the other hand, when we start talking about the system of international law, it has its imperfections, but we're much more ambivalent about saying people can ignore it with impunity, because we don't see it as an illegitimate order in the way we see the Third Reich. So I think you're exactly right when you point out that this is not a real tension. It's an apparent tension, resting on the fact that most of us don't have any qualms about thinking of the Third Reich as an illegitimate regime, and that that's why we're uncomfortable with his actions. Whereas, when you do have at least a fledgling legitimate international legal order, we become uncomfortable with people who flout it. So I think that's right, but it places the central question of this course into sharp relief because our question is, "Well, what is it that makes a regime legitimate or illegitimate?" If we say you have an obligation to resist an illegitimate regime and to obey a legitimate one, that just pushes the question one step further back. What is it that makes a regime legitimate? And that is the question that we're going to organize our discussion around in the coming weeks and months.
We're basically going to explore five answers to that question, or five types of answers, or five classes of answers to that question, and they represent the main ways in which political legitimacy has been thought about in the western tradition over the past several hundred years. The first one that we're going to look at is the utilitarian tradition. And the utilitarian tradition, which we're going to trace back to Jeremy Bentham, who we'll start talking about on Friday, says that a regime is legitimate to the extent that it maximizes the greatest happiness of the greatest number of its citizens. Now, there are huge arguments within the utilitarian tradition as to what counts as happiness, how you measure it, what you do if promoting the happiness of some comes at the expense of the happiness of others. All of those things get argued about endlessly in the utilitarian tradition, and we will explore some of those arguments as we go along. But the basic idea is that good regimes, legitimate regimes maximize the utility or the happiness of people who are subject to them. And then the second set of answers, the second tradition we're going to look at is the Marxist tradition which comes into its own in the nineteenth century, and that basically identifies the legitimacy of the state with the presence or absence of exploitation. Legitimate governments are those that try to prevent exploitation, and illegitimate governments are those that facilitate exploitation. Again, just as in the utilitarian tradition there are huge disagreements about how to identify utility and measure it, in the Marxian tradition, from the beginning, there have been huge disagreements about what constitutes exploitation, how you would know it when you see it, what would be involved in eradicating it, and how governments might or might not be able to do that. 
But at the end of the day it is the presence of exploitation that is the marker of an illegitimate order, and it's the possibility of escaping exploitation that creates the possibility of a legitimate order. And so the Marxian answer to the question of legitimacy revolves around this idea of exploitation. The third tradition we're going to look at is what's commonly known as the social contract tradition, and the social contract tradition founds the legitimacy of a political order in the notion of consent, agreement. Again, you'll find just as with all these other traditions a lot of disagreement about what constitutes agreement, what constitutes consent. Is it some people who actually agreed at some point in the past that should bind us? The American founders agreed on certain things and therefore we should be bound by what they agreed upon, or is it what any rational person would agree to? Is it a hypothetical idea of consent? Does the consent have to be active? Do people actually have to engage in consent or can it be tacit? Can you demonstrate your consent simply by not leaving? All of these issues will get a lot of attention when we look at the social contract tradition, but for all of their internal differences and disagreements the social contract tradition comes down to the proposition that the legitimacy of the state is rooted in the consent of the governed. The fourth tradition we're going to consider is what I call in the syllabus the anti-Enlightenment tradition, reacting against those first three Enlightenment traditions, and the anti-Enlightenment tradition appeals to tradition. It's the tradition. The legitimacy of the state resides in, is discovered in the inherited norms, practices and traditions of a society. Again, we will find people struggling with and arguing over what constitutes a tradition, how you will know a tradition when you trip over it, what happens when people disagree about what the tradition implies. 
All those questions are on the table once we start appealing to tradition, but at the end of the day it is the appeal to tradition itself that becomes important; that when you criticize a government you have to appeal to something that is immanent in and accepted by the tradition. The rights of Englishmen, let's say, as it was appealed to in the eighteenth century. And then there's a tradition about what the rights of Englishmen are thought to have been going all the way back to Magna Carta. That becomes the basis for arguing about the legitimacy of the present actions of the government. The fifth tradition we're going to look at is the democratic tradition, and the democratic tradition says that the legitimacy of the state depends upon the extent to which it obeys something which we'll call the principle of affected interest. That is, whether those people whose interests are affected by its actions get to control what it does. A close cousin of consent, you might say, and indeed the democratic tradition is a close cousin of the social contract tradition, but at the end of the day it's a different tradition and often works through, but not always, the idea of majority rule. And so, again, in the democratic tradition there are huge controversies about how we operationalize and apply this idea of affected interest as the basis of politics, but the underlying notion is that if you can resolve those issues, trying to make the government the agent of the people who are affected by what it does is the name of the game in promoting legitimacy and undermining illegitimacy of the state. So those are the five traditions we're going to explore. We're going to come back again and again, not only to the Eichmann problem, but to other practical examples that throw this question of the legitimacy of the state into sharp relief. But at the end of the day we're always trying to understand what the basis for political legitimacy actually is.
We have a couple of minutes for questions if anyone would like to... Yeah, mic over here. Student: What about the notion that might makes right and that the most powerful stays the most legitimate? Prof: Okay, he says, "What about the notion that might makes right and it's the most powerful states that get to dictate what legitimacy is?" I mean, you could say that. After all, if you think about the example we've been considering today. At the end of the Second World War we had the Nuremberg Trials run by an American Supreme Court justice. They went to Germany. They did a lot of what Israel did in 1960, actually. They tried these Germans by laws that hadn't existed at the time for obeying the laws that had existed at the time. Many of them were executed. Many of them were in prison. You could say this is just might makes right. If somebody else had won the war there would have been a different outcome. So this is a doctrine which we will consider in the course of our discussion, actually, on utilitarianism. It is, for those of you who want the jargon, the doctrine of legal positivism. Anyone want to take a crack at what legal positivism means? Anybody know? No reason you should know. Legal positivism was the doctrine that said exactly what this gentleman here has been saying; that what makes something law is the ability to get people to obey it. If you go back into Medieval Europe we'll find a distinction between natural law and positive law. Natural law is sometimes identified with the will of God, sometimes with timeless universal moral principles, some higher law by reference to which we judge what actual legal systems are doing. So when we say the Third Reich was a criminal regime, an illegitimate regime, we're ultimately appealing to some kind of higher law, some natural law idea as to what constitutes a legitimate regime, right? And we're going to talk a lot more about natural law later. 
Well, legal positivism was the doctrine that there's no such thing as natural law. Jeremy Bentham, who we're going to start talking about on Friday, says, "Natural law, natural rights it's dangerous nonsense, nonsense on stilts. There's no such thing as natural law." And he is considered as one of the intellectual sources for this doctrine of legal positivism, which comes into its own in the nineteenth century. If you say, well, there is no higher law then the question is, well, then what is law based on? One answer is power. Bentham wanted to say it should be based on science. As I mentioned to you last time, the Enlightenment is all about faith in science. So we will consider a version of that argument when we come to talk about utilitarianism, but it's a good point to make particularly in the context of the Eichmann problem. Now, what we're going to do, I realize now I misspoke when I said we were going to talk about Bentham on Friday, didn't I? Why didn't some astute person say, "No, you're not." We're actually going to talk about Locke. Now, you might say, "Well, why are you going to talk about Locke?" And the answer is that Locke actually performs two different functions in this course that you will see. They come together later on. You'll see how they come together later in the course. But Locke, on the one hand, we're using as somebody to provide us with a way of understanding the Enlightenment move in politics. And as I said to you, the Enlightenment move involves a twin commitment to the ideas of science as the basis for social organization and freedom as the highest good, and that that Enlightenment pair of assumptions is going to inform the first three traditions we look at, right? The utilitarian tradition, the Marxian tradition and the social contract tradition. Locke will also be looked at later as a representative of the social contract tradition. 
But what we're going to do in Friday's lecture is think about the broad themes of the Enlightenment, for which the Locke reading is going to provide the background, and we will not actually get to utilitarianism until next Wednesday. One thing I would say to you about the Locke going in--on the one hand it's a tremendously famous book. It's been in print continuously for more than 300 years. It's been translated into all of the world's major languages and many of the world's minor languages. It's Locke's most important political writing for sure. It's not a fast read. It's a political pamphlet. It was written for a very particular political purpose that I'll tell you something about, but it's not a fast read in the sense that what you read for today is a fast read, and some of it's going to go right by you and that's okay. You might think, "Well, not only is it not a fast read, but he's going to do a lot of this in one lecture. It seems like a lot to do in fifty minutes," but the truth is we're going to come back to Locke again and again through this course, and what you're going to discover is that the ghost of Locke is hovering behind us all the way through. In many ways modern democratic theory is a footnote to Locke, and so you'll see we'll consider Locke as a father of the Enlightenment, as a representative of the social contract tradition, and then Lockean considerations will come into the Marxist tradition, the democratic tradition, and the social contract tradition as well. So some of what you read for Friday will go by you. Don't worry about it because we're going to revisit Locke again, and again, and again as we go through the course. Okay, so we will see you on Friday and then that's when we'll begin talking about John Locke.
The Moral Foundations of Politics with Ian Shapiro
Lecture 3: Natural Law Roots of the Social Contract Tradition
Prof: We're going to start today by talking about the Enlightenment, and I want to offer one prefatory caution about any way of dividing up the history of ideas, any way of periodizing, if you like, the history of ideas, which is that there's no single right way to do that, and indeed any way you do it obscures important things in some ways. So, for example, sometimes people divide up the history of Western political thought into the ancients and the moderns. And the ancients are thought to have certain characteristic preoccupations that change sometime around the sixteenth century with Machiavelli, or in the seventeenth century with some of the folks we're going to be talking about today, and so we get this picture presented that there's a fundamental difference between ancients and moderns. On the other hand, another way of dividing up the history of the tradition is between naturalists and anti-naturalists. So naturalists are people who think that understanding nature and understanding human nature is the key to political theorizing, whereas anti-naturalists look for something else, whether it's God's law or transcendental platonic forms or something like that. And so we can have a division of the tradition between naturalists who generally trace all the way back to Aristotle, and anti-naturalists who trace all the way back to Plato. It's not that the distinction between naturalists and anti-naturalists is better or more accurate than the distinction between ancients and moderns; it's just a different kind of distinction that highlights different features of these thinkers for different purposes. So I just say that as a caution because now we're focusing on the Enlightenment as a characteristic move in Western political thinking that really starts in the seventeenth century and comes into its own in the eighteenth century, but I don't want you to reify that idea.
These Enlightenment thinkers we're talking about do in fact have important points of continuity with medieval and ancient thinkers, some of which will come up in our discussions. Nonetheless, I think it's useful to focus on the Enlightenment as a distinctive turn in Western political thinking, and that's going to structure our discussion going forward. And the first three traditions we're going to consider in this course, namely utilitarianism, Marxism, and the social contract, are all variants of Enlightenment thinking. But before we get into the nitty-gritty of those traditions, I want us today to take a step back and think more generally about what the Enlightenment was, what this Enlightenment move is that I'm pointing to as setting the outer philosophical boundaries of these first three traditions that we're going to be considering starting next Wednesday. And as I said in my introductory lecture, the Enlightenment really involves a twin commitment as far as politics is concerned, and that is first and foremost a commitment to science as the basis for theorizing about politics. A commitment to science rather than a commitment to tradition, or religion, or revelation, or anything else. Rather the idea was that science is going to provide the right answers for thinking about the correct political organization of society. And secondly, the core political value for Enlightenment thinkers is this notion of individual freedom, to be operationalized or realized through a doctrine of the rights of the individual. We'll see shortly that one of the distinctive moves of the seventeenth-century writers about politics is that they stop talking so much about natural law and start instead to talk about natural rights, the rights of the individual to realize their purposes through politics.
And so the Enlightenment, as I said, revolves around this twin commitment to the importance of science as the basis for theorizing, and the importance of individual freedoms realized through a doctrine of individual rights. And we're going to use John Locke as a window into this early Enlightenment thinking, separately initially from his role as one of the early social contract theorists that we'll be dealing with in a few weeks. But first we're going to focus on this idea of science. Now, let's think about what science is, what it's all about. I have three diagnostic questions here, and the reason I put them up will become plain in a second. It has to do with the fact that the early Enlightenment theorists, the seventeenth-century theorists, thought that the hallmark of genuine knowledge, the hallmark of science, was certainty. You might all have come across, in one philosophy course or another, the famous Cartesian idea, Descartes' idea. Anyone read Descartes who can tell us what he's most famous for? Student: [inaudible] Prof: Do what? Student: [inaudible] Prof: The Cogito, did you say? Student: [inaudible] Prof: Yeah, Descartes. What was his idea? Student: [inaudible] Prof: Systematic doubt. You're dead right. Why was he interested in doubting things, anybody? This is a bonus question because Descartes is not on the syllabus, but why was he interested in trying to doubt things? What was he looking for? Yeah? Student: He was looking for what you can truly be certain about. Prof: Exactly. He was looking for absolute certainty; that the hallmark of genuine knowledge is certainty. So he was asking himself, "What is it that we can have certainty about?" And as you probably know, Descartes had his own answer to that question. What was it? Student: "I think, therefore I am." Prof: Correct. I'm a Yalie, therefore I am, right? Yeah, "I think, therefore I am." And Descartes thought this was a particular kind of proposition because the very act of trying to doubt it affirmed it. You couldn't doubt it.
And so that was what the early Enlightenment theorists were looking for. What is it that makes knowledge certain, puts it beyond doubt? And so I've got up here three propositions. The first: the sum of the interior angles of a triangle equals 180 degrees. How many people think that that proposition can be known with certainty? Yeah, a lot, I mean, how would you try and doubt it? You could measure one triangle, measure the next triangle, measure a third. After you had measured 5,604 triangles you'd start to say, "Hmm, maybe there's a theorem here." It's not as if the next triangle I find is going to turn out not to add up to 180. It's just not going to happen, right? We know it, and that's what we've come to refer to in modern philosophical thinking as a priori knowledge. It follows from the nature of the definitions, the nature of the terms, that the proposition can't be doubted. So often we say a bachelor is an unmarried man. You're not going to go and start looking at bachelor after bachelor to see if you can find a married bachelor. There's no such thing as a married bachelor. So those propositions we tend to think of today as analytic propositions; they follow analytically from the definitions of the terms at issue, or, as with truths of mathematics, from a theorem that tells you it must be the case that the sum of the interior angles of a triangle adds up to 180 degrees. When we come to what I've numbered two here, how many people think that that can be known with certainty? Some of you do. Anyone not so sure? Why are you unsure? Where is the microphone? What's the source of your doubt? Yeah, okay. Student: Well, it seems like in terms of geology we weren't able to penetrate what was underneath the earth for a long period of time, and the knowledge of tectonic plates even was a new or more modern discovery, I guess. Prof: Right, so I think that's exactly right.
The current best empirical understanding of earthquakes is that it's the movement of tectonic plates, but science might advance. There might be changes in geology, which might lead us eventually to learn, "Well, yeah, some earthquakes result from tectonic plate movement, but other earthquakes might result from something else," or we will learn that something moves the tectonic plates that we don't currently know about. This is an ongoing process of empirical discovery, right? It's not a proposition that follows from the nature of the terms, and these we call empirical propositions in modern philosophy of science. Sometimes they're called, if you like the fancy Latin terminology, a posteriori as opposed to a priori. They're not analytical propositions, right? They're a result of observation and trying to figure out what the causes are behind those phenomena. So generally speaking they don't have the same kind of force as analytic propositions. What about this third one? Consent is the basis for political legitimacy. Anyone think that can be known with certainty? Nobody. I think that's right. Most people would say, "Well, that's a moral or normative judgment of some kind. Maybe people agree with it, maybe they don't, but certainly it's not a scientific proposition, at least not obviously a scientific proposition." Indeed, even if we took it just in a descriptive sense, not to mean consent should be the basis of political legitimacy, but as a descriptive matter about regimes, some people would say, "Well, maybe regimes are based on consent, but maybe some regimes are based on other things. Maybe they're based on claims to divine authority. Maybe they're based on utilitarianism." We're going to talk about that next week.
So it seems even as a descriptive matter, never mind as a normative matter, this is not a scientific proposition in the way that causal statements about the world are scientific propositions, and certainly not in the way that analytical statements about the world can have scientific certainty. So we could spend more time on this, and there are indeed nuances in modern conceptions of science that I haven't gotten to that would paint a more complex picture of the differences among these three propositions, but what I want to draw your attention to now is that in the early Enlightenment, seventeenth-century thinkers about the nature of science, Descartes and his contemporaries, thought very differently about science, indeed. In the early Enlightenment they would have agreed that the sum of interior angles of a triangle equals 180 degrees is a proposition that can be known with certainty, but interestingly they would have put this claim here in the same category. They would have put this claim about consent in the same category with propositions about mathematics, propositions behind which there's a theorem, as we just said when we were talking about triangles, and they would have relegated empirical and causal claims to an inferior status. Now, you might think that is pretty weird. At least prima facie that seems pretty weird. And in order to get your mind into the world in which they lived you have to suspend disbelief for a moment about your concept of science and try to get your mind around early Enlightenment conceptions of science, which involved thinking very differently. And the reason is that early Enlightenment thinkers wouldn't say that this is certain because of the meanings of the words, but rather it's certain because there's an act of will behind it. 
Now, this may sound even weirder than anything I've said so far, but consider this passage here written by Thomas Hobbes, who is writing in the middle of the seventeenth century, in a minor piece that he wrote, Six Lessons for the Professors of Mathematics. This is not one of Hobbes' major works, but I think it's one of the most succinct descriptions I've ever seen of the early Enlightenment conception of science. He says, "Of arts" (which for him is a general term to capture knowledge), "some are demonstrable, others are indemonstrable; and demonstrable are those the construction of the subject whereof is in the power of the artist himself, who, in his demonstration, does no more but deduce the consequences of his own operation. The reason whereof is this, that the science of every subject is derived from a precognition of the causes, generation, and construction of the same; and consequently where the causes are known, there is a place for demonstration, but not where the causes are to seek for. Geometry therefore is demonstrable, for the lines and figures from which we reason are drawn and described by ourselves" (we make the triangle); "and civil philosophy is demonstrable, because we make the commonwealth ourselves." Okay, so this is the key to what seems like a very weird conception of political philosophy as equivalent to mathematics. "[C]ivil philosophy is demonstrable, because we make the commonwealth ourselves. But because of natural bodies we know not the construction, but seek it from the effects, there lies no demonstration of what the causes be we seek for, but only what they may be." "Only what they may be." There's not going to be certainty about earthquakes, about the causes of earthquakes. We can make probabilistic judgments.
We can make empirical claims, but at the end of the day those claims are fallible, they're corrigible, they might have to be revised in the face of future knowledge, and science is not going to ever reach the level of certainty with propositions of that sort. And so this is why, going back to this, I did this ordering of these propositions on a Hobbesian view. These two are equivalent not because of anything about theorems or analytics, but rather because of this will-centeredness. They're the product of human conscious action. We make the triangle and we make the commonwealth, and so we have privileged access into what goes into that making, but we don't make the planet. God made the planet and we can only observe the effects of earthquakes and then try and guess about their causes, right? So when we come to John Locke, who is at one with Hobbes on this point about knowledge, the term I want to use to capture this early Enlightenment conception of science is the workmanship ideal, right? Rather than a priori or analytic knowledge we're going to talk about knowledge in terms of this idea of workmanship, maker's knowledge. And it's important to realize that in its ultimate foundations, this is actually a theological proposition. This is true both for Hobbes and for Locke--we're going to leave Hobbes behind now because you didn't read him and we're going to focus on Locke, but as I said, they have the same view on this particular question. So God has intimate knowledge of the universe because he created it, okay? And that is this idea that making is a source of knowledge, right? So God knows the causes of earthquakes because he made the earth. We don't because we didn't make the earth, but we can have God-like knowledge of what we create because God gave us the power to create things. So this is, in the first instance, a theological argument, okay? So we are like miniature gods with respect to what we create.
We have the same kind of maker's knowledge over what we create as God has over what he created. Now, there are some constraints on our kinds of knowing because we are also made by God, and I'm going to get to that in a minute. But what he did for human beings, on Locke's telling, that he didn't do for any other aspect of creation, is that he gave us this creative capacity. He gave us the capacity to behave like miniature gods in the world, and I'll come back to that. But I did want to alert you to the fact that for Locke this was actually something of a tormenting idea. We'll hear a lot more about Locke later, but one of the most important things you need to remember about Locke is that he was a theist. He was a believing Christian theologian throughout his life. And there was a huge debate among theologians that had gone on for two centuries before Locke wrote, and this was the puzzle that they were worried about. The question was, "Can God change natural law?" If you said no, that would suggest that God is not omnipotent, but if you said yes, that would suggest that natural law is not a system of timeless universals, because if God could change natural law maybe he'll choose to change it tomorrow. And as we were talking about on Wednesday, remember when somebody asked me about legal positivism, I said that that was a doctrine that had rejected the idea of natural law. The idea of natural law was that there are some timeless universals that we can appeal to, to judge actual political institutions. So we can appeal to natural law in the context of the Eichmann problem we were discussing to say that Nazi Germany was an evil regime, right? We appeal to this higher natural law. Well, Locke had been concerned about this theological problem with natural law: if you say it's a timeless universal, that seems to undermine the idea of God's omnipotence, because then God can't be an all-powerful figure.
But if on the other hand you say, well, God can change natural law, then that undermines its possible universality. And Locke struggled with this. If you become experts on the seventeenth century and you go back and read his essays on the law of nature written in the 1660s you'll see him really torturing himself as to how to resolve this. He never really resolved it, but in the end he came down on what we're going to call the command theory, the workmanship theory, the will-based theory that-- he said, "We have to say that God is omnipotent and let the chips fall where they may for the timelessness of natural law." He was never entirely comfortable with it, but he couldn't let go of that proposition for reasons I've already alluded to; that he thought something couldn't have the force of a law without being the product of a will. And so it's God's will that's the basis of natural law in God's case, and God's knowledge of his creation is traced back to this idea of the workmanship ideal, maker's knowledge. So God has maker's knowledge of his creation. We're going to see two other important features of this workmanship ideal that I'm just going to mention now and then I'll come back to. One is that it also translates over into normative considerations. That is to say, not only does God have workman's knowledge, creator's knowledge of what he creates, but he also owns what he creates. He has rights over his creation. We are God's property because he created us. The world is God's property. The universe is God's property. You own what you make. God made everything, so ultimately God owns everything. And just as we can behave as miniature gods in understanding our creation, we can also behave as miniature gods in owning what we make. So we will see later this affects his theory of property and his theory of the state, because we're going to have rights over the state which we create. We're going to get to all of that later, okay?
But so it's a unified theory, this workmanship ideal, that goes both to the question of knowledge and to the question of ownership, rights, entitlements, everything is traced back to this workmanship ideal. The second point that I'm just going to mention now and will come up later, it'll come up most dramatically when we come to consider Marx, is that what gave Locke's theory its internal coherence was that it was a theological argument. None of this really makes any sense unless you start from this proposition that God created the universe and has maker's knowledge and maker's authority over the universe. Everything flows from that. As I said, in Locke's understanding he gave us this unique capacity to create as well, but we're answerable to him in ways that'll come up later. But none of this makes sense without the theological assumptions behind it. One of the big projects of the Enlightenment which is going to concern us one way or another throughout the course is what happens if you try to secularize the workmanship ideal. That is to say, you'll see in the labor theory of value that Marx embraces, which is at the core of his political theory, and modern social contract theorists like Nozick and Rawls and many others, what you're going to find is that people want to hold onto this basic structure of thinking. People find this workmanship ideal intuitively very appealing, but they're going to try and detach it from its theological moorings because they either don't believe the theological argument for one reason or another, or they find it problematic, or they want to convince people that this idea that making creates ownership is powerful and important regardless of your religious convictions. And so one of the big challenges and projects of the Enlightenment is going to turn out to be: how, if at all, can we retain the structure of the workmanship ideal while shedding its theological foundations? But that's getting ahead of ourselves. 
So what I want the takeaway point for today to be is that this is the early Enlightenment conception of science. It's a workmanship ideal that appeals to certainty, which was the Cartesian preoccupation and the Hobbesian preoccupation, but it does it in a different way. It doesn't look for what we today think of as analytic propositions; rather it looks for propositions that can be known with certainty because we introspect into our own will and understand with certainty what we have created, okay? And now let's start to transition to talking about individual rights, and we do this by moving from God's knowledge of his creation to God's ownership of his creation. Just to sum up this workmanship ideal, Locke says, "The state of nature has a law of nature to govern it, which obliges every one: and reason, which is that law, teaches all mankind, who will but consult it, that being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions" (I'll come back to that): "for men being all the workmanship of one omnipotent, and infinitely wise maker; all the servants of one sovereign master, sent into the world by his order, and about his business; they are his property, whose workmanship they are, made to last during his, not one another's pleasure: and being furnished with like faculties, sharing all in one community of nature, there cannot be supposed any such subordination among us, that may authorize us to destroy one another, as if we were made for one another's uses, as the inferior ranks of creatures are for ours." So this idea should give you a sense of the way in which, for Locke, individual rights are basic and rooted in the workmanship ideal. We are God's creations. We are his property and that means, as he says here, we can't be one another's property. We can't own one another. You might say, "Well, don't parents make their children? Don't parents own their children?"
And indeed, Sir Robert Filmer, whom Locke was arguing against in the First Treatise, took exactly the view that parents own their children. But Filmer had a very different conception of the theological foundations of the universe. Filmer, who was a defender of absolutism, said God gave the world to Adam and his heirs through a system of primogeniture, or inheritance. And so there were a lot of folks running around in seventeenth-century England, for example, saying, "Well, if you pay me I can prove that you're a closer descendent of Adam than the next guy." This was the notion that the lineage to Adam was important, and indeed that the kings and queens of Europe got their political authority because they were the most direct living descendants of Adam and Eve. So it was this idea that God gave the world to Adam, and he to his children, and their children after them. And in absolutist thought they took very seriously the idea that parents own their children and can indeed sell them into slavery or worse because they're their property. Locke says, "No, God makes the child and uses the parents as his instrument." The parents simply act out an urge that's implanted in them, but they can't fashion the intricacies of the child. And, of course, they can't put a soul in the child, so the child is God's creation and we are all God's creation, unlike the view developed by Filmer. And so when we think about the doctrine of individual rights the first thing to see is that this workmanship idea gives us a fundamentally different status than anything that had existed before in Western political thinking. Because we are created by God, we are all equal in the sight of God, and we have the capacity to function as miniature gods because of this capacity to create things that we have, right? Secondly we're all equal before God.
We're equal, as he says here, "Whether we consider natural reason, which tells us that men being once born have a right to their preservation and consequently to meat and drink, and such other things as nature affords for their subsistence: or revelation, which gives us an account of those grants God made of the world to Adam, and to Noah, and his sons, it is very clear that God, as King David says, has given the earth to the children of men; given it to mankind in common." So we are all equally created by God. We all have the same rights to the common as everybody else, and there's no sense that Adam and his heirs have some kind of priority. Thirdly, it's very important for Locke, and a very radical move, to say that there is no authoritative earthly interpreter of the scriptures. Anyone guess why I might say that? There may be ambiguities in what the scriptures mean, and one person says it means X and one person says it means Y. Okay, we have a volunteer over there. Why is it important for Locke to say that nobody on earth can settle those disagreements? Student: Because that person would supposedly be closer to God being an interpreter of the word of God. Prof: Right. That person would set themselves up as being closer to the word of God being the interpreter of the word of God. "[E]very man ought sincerely to inquire into himself by meditation, study, search, and his own endeavors attain the knowledge of, cannot be looked upon as a peculiar..." I'm sorry. I'm misreading. "[Those things that] every man ought sincerely to inquire into himself, and by meditation, study, search, and his own endeavours attain knowledge of, cannot be looked upon as the peculiar possession of any sort of men." I think a word has been dropped there, but the meaning of it is that authoritative knowledge can't be the peculiar possession of any particular person. "Princes, indeed, are superior to men in power, but in nature equal.
Neither the right nor the art of ruling does necessarily carry along with it the certain knowledge of other things and least of all of true religion. For if it were so, how could it come to pass that the lords of the earth should differ so vastly as they do in religious matters?" So political leaders differ with one another about religious matters, and that itself tells you that you can't rely on anybody to settle them because nobody has more privileged access than anybody else to God's knowledge. So it's the beginning of what we will later come to refer to as a Lutheran idea of the relationship between man and God, but the idea is that everyone must read the scriptures for themselves, and if God wants to speak to you he will speak through the scriptures, and if somebody else reads them differently, nobody on earth can settle that disagreement. Nobody has the right to settle that disagreement. Everybody must settle it for themselves. And that's going to turn out to be hugely important in politics because if people start to believe that the ruler of the society is violating natural law, there's nobody who can say they don't have the right to hold that belief. What it means they can do in practice is another matter that we'll get to later. So it's going to provide the basis for the right to resist the authority of the state, the right to resist sovereign authority. When we think about the Eichmann problem again: if you are ordered to do something, to send people to a concentration camp, and you read the scriptures, and you say, "Well, my reading of the Bible tells me that this is wrong," there is no earthly authority who has the right to contradict you. Tremendously important philosophical move with huge political consequences. Of course there was another consideration here.
Who knows--it was not just the disagreements among the kings and queens of Europe, but who else might Locke have been thinking about in the 1680s, not wanting to give authority to interpret the scriptures? Some history major, what was going on in England in the 1670s and '80s, anybody know? Who were they worried about? Not just the competing kings and authorities--yeah? Student: The Papacy? Prof: The Pope, yeah. Okay, we don't need the mic. The Pope, of course. You will see when you come to read Locke's letter on toleration that he has a very wide view of toleration, but he doesn't think that Catholics should be tolerated. We'll get to why later. But the reason here would be that the Pope sets himself up as the authoritative interpreter of the scripture and you can't have that, okay? You have to have something like what we would call today a disestablished church. And indeed that takes us to a fourth source of individual rights in Locke's thinking, namely that every individual is sovereign. "The care of souls cannot belong to the civil magistrate, because his power consists only in outward force; but true and saving religion consists in the inward persuasion of the mind, without which nothing can be acceptable to God. And such is the nature of the understanding, that it cannot be compelled to the belief of anything by outward force. [...] And upon this ground, I affirm that the magistrate's power extends not to the establishing of any articles of faith, or forms of worship, by the force of his laws. For laws are of no force at all without penalties, and penalties in this case are absolutely impertinent, because they are not proper to convince the mind." The state can control your behavior, but it can't make you believe anything. And this is the source of his objection to any earthly authority being in the position to dictate what religion requires, whether it's the king or the Pope, right?
There is no authoritative interpretation of the scriptures in this world. Tremendously important move. So just to summarize: we have this workmanship idea that informs the early Enlightenment conception of science. It's preoccupied with certainty, and it's rooted in this creationist theory of knowledge that we have workman's knowledge of our workmanship. It also translates over into the theory of rights because just as we have workman's knowledge of our creation, we also have workman's authority over our creation. We own what we make just as we know what we make. Secondly, we have a right to what's given to mankind in common--what he's later going to call "the waste of God"--just as little or as much as everybody else. There is nobody who has a prior claim on the property out there, the animals, the land; everything that's in God's creation. We all have the right to use it and nobody has the right to stop anybody else from using it, right? We didn't make it so we don't own it. God made it and he gave it to us in common. This is in contradiction of Filmer's view that he gave it to Adam, and that Adam's heirs have inherited both the goods and property in the world and the political authority in the world directly from Adam. So it's this basic idea that we have common rights to the creation that God has put before us. Third, we have equal access to the word of God, and that's really important because the word of God, or natural law, is what binds all human beings. We're all his creation and natural law is the expression of his will. We have to obey it, but what if we don't agree about what it means? That's what the Second Treatise is about. We're going to talk about that in a few weeks, right? But the important point here is the sovereign doesn't have any right to declare what natural law means, or the magistrate to compel us to believe the received interpretation.
And finally each individual is sovereign over himself because "true and saving religion," as he puts it, consists of "inward persuasion of the mind." You can't be made to believe things, and it's your authentic belief that something is the right answer which is essential to its being the right answer. And this is going to supply the basis for the right to resist, the right to resist the authority of the state, which is ultimately the right on which Locke's political theory is constructed. So this workmanship ideal informs the theory of knowledge and the theory of rights, and we're going to see that it doesn't go away. People try to secularize it, and we're going to explore the ways in which they do that at considerable length. They try to modify it as well, but it doesn't go away. And you'll see that one of the reasons it doesn't go away is that, problematic as it might be, and there are many problems with it, very few people, very few people in this room, I will bet, are going to ever want to give it up entirely. Okay, so we'll start next Wednesday with classical utilitarianism, the first of our Enlightenment traditions. See you then.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
8_Limits_of_the_Neoclassical_Synthesis.txt
Prof: So in Monday's lecture we let something slip by all of us without comment that we probably should not have let slip by, and it's something I want to focus on today. That is, when we spelled out Mill's harm principle I said, "Is everybody sure they know what it means?" Never mind whether you agree with it or not, but whether you're sure that you understand what he is saying, and we all agreed that at least it was clear. In fact, though, it's ambiguous, and the highlighted phrase is the ambiguity I want to focus on. At the end of that long paragraph I read you, when Mill is saying, "If you think something's bad for somebody or that they shouldn't do it," remember I gave you an example of going to law school when I don't think it's good for you. Mill says, "Those may be good reasons for remonstrating with him, or reasoning with him, or persuading him, or entreating him, but not for compelling him or visiting him with any evil in case he decides to do otherwise. To justify that (I need to justify forcing somebody to do something or prohibiting somebody from doing something) the conduct from which it is desired to deter him must be calculated to produce evil to someone else." "Must be calculated to produce evil to someone else." And there are two issues raised by that phrase that I really want us to focus on with laser-like intensity in today's lecture. One is signaled by the passive voice in it, "must be calculated." Well, that obviously puts on the table, who does the calculating, right? And secondly, what does it mean, calculated? It could mean calculated as in intended. "I didn't intend to harm you. There was no calculation to harm you when I got paralytically drunk behind the wheel of a car, and then I don't even remember putting the keys in the ignition and driving off. I didn't intend to harm you," right? Or is it a third party calculation? Again, then we get to the question, by whom? Who's going to decide? Who's going to do the calculating? 
So what does calculated actually mean, and who is it that's going to do the calculating? Those are the things we're going to zero in on today. And I thought the best way into it was to consider some examples. And one I left you with on Monday was, what about prostitution? This is what is sometimes referred to as a so-called victimless crime. After all, the transaction between a prostitute and a client is a Pareto superior transaction, right? It's a voluntary market transaction. They wouldn't do it if it didn't make both people better off. So should we just say from Mill's point of view, "There's no harm here; it truly is a victimless crime and we shouldn't make things criminal if they are victimless?" How many people think that makes sense? About half. Who thinks it doesn't make sense? Nobody? Yeah. Okay, so why? Why doesn't it make sense, somebody who thinks it doesn't make sense? Over here? Why were you thinking it didn't make sense? Student: Even though it's victimless I think harming yourself has some sort of implications, I think. Prof: So it's the prostitute harming herself? Student: Sort of. I think there are some moral implications in there that must be dealt with. Prof: Like what? What kinds of implications? Student: Besides just harming--okay, let me take that back, harming yourself. It might harm society in the sense that it brings down society's moral standards, I guess. Prof: Okay, it might harm society. Mill's very clear that he wants to reject the idea that there are social rights, so he wouldn't accept that, but I don't think you should give up so quickly. Is there some other way of articulating what you have in mind there when you say harm society? What do you mean? If you unpack what does it mean when you say, "harm society?" It's tricky. Anyone else want to have a go? Yeah, right behind you. Student: I guess it poses a negative externality on society, and therefore... Prof: Okay, and what is the negative externality? 
Student: Well, I guess in the case of prostitution it could be the objectification of women in this case. Prof: Objectification of women, okay. How would we say that's a harm to society? Maybe it would be--again, Mill's going to reject the idea that society has a right not to be harmed. How can it play out in a way that doesn't involve making that claim that society...? Student: Well, I guess you could say it poses a negative externality on the individuals in the society by, I guess... Prof: On women. Student: Yeah. Prof: On women. It poses a negative externality on women. It reinforces the set of stereotypes and so on. Okay, so that might be one way in which prostitution causes harm. Any other ways? Yes, sir? Student: It could also be said that it has a negative externality because it undermines family values. Prof: It undermines family values. Student: Which would then, in turn, hurt children and the future generations. Prof: And that would, in turn, hurt children. Yeah? Student: Well, I think it's difficult also to disconnect prostitution from this sort of theoretical model from the actual model of prostitution. I think when we put pimps, and sexual slavery, and that sort of thing in the picture then the harm becomes far more real, and it's questionable whether people are voluntarily participating in these transactions to begin with. Prof: Okay, so if you're a really hardboiled Millian libertarian I think slavery's not an issue. Slaves are taken by force, but selling yourself into indentured servitude is not obviously something-- would Mill say you should be allowed to do that? It's your choice. It's tricky because you're really giving up your autonomy. You'll see in politics, we confront later in the course, what do you do with an election where a party runs, as in Algeria in 1991 saying, "If elected we are going to abolish democracy," and they get elected. In that case the Algerian military stepped in. 
If somebody says, "I'm going to sell myself into servitude," it's a voluntary act, should we save them from themselves? That's a very tricky one, but let's set those ones aside. And I think there would be similar issues with suicide for Mill. Should you be allowed to stop people committing suicide? Those are very hard. I think they're separate, though, from these issues about what we're calling externalities. People are saying, "Of course there are harmful effects of prostitution. It harms women. It undermines certain kinds of moral codes." What would Mill say in response to that? Yeah? Student: Well, I think for Mill it really depends on the context of an act. So you can be drunk at home, but you can't be a policeman drunk on the job. So if we look at prostitution, if you're working as a free agent, sure, you can engage in prostitution, but if you're a father and you're married then he should probably view that as something not okay. So it really matters for him what the context is of a certain transaction. Prof: Okay, I think that's right, and you could imagine a more refined version of the harm principle that tried to incorporate that. I gave the example last time; I think I said, "Mill would presumably be completely comfortable with outlawing drunk driving and punishing that activity, but not punishing drinking." So presumably the principle would be some version of "interfere with human conduct as little as possible to prevent harm," right? So it's killing a gnat with a sledgehammer to outlaw drinking just because some people drive drunk. I think that's fair enough. But I think some of the people who were saying allowing prostitution has bigger negative externalities wouldn't give up that quickly. Anyone who thinks we shouldn't give up that quickly, that there's something else here? Nobody? Maybe they would give up that quickly. Okay, we'll leave that aside for a minute and come back to it. So if you're now bearing this in mind, go back and reread On Liberty. 
I think what you'll come away with is it's actually quite confusing just what Mill means by harm. Because some harms, he wants to say, are trivial harms, so the harm you suffer through not getting a place in college in a competitive exam, right? He's not going to allow that harm ultimately to be dispositive. He's going to allow that to be outweighed by the benefits to society of competitive meritocracy. And remember our free trade discussion last time. So some harms are more significant than other harms and some harms are outweighed by utilitarian benefits. It seems like, some of the time, if Mill's saying, "Prohibition shouldn't be allowed," he's saying essentially, "Unless I intend to do you some harm. You might find it offensive that I sit around drinking all day, but it's none of your business. You might have family values which say that if there are people out there who are prostitutes, it undermines the family. That's your family value, thank you very much. It's not my family value," right? So you could take that view. And after all, if you think about the debate we've had in this country about gay marriage over the past decade or so, that's exactly the claim and the counterclaim, right? Some people say, "There's no reason in the world that gay people should be prevented from getting married. They're not hurting anybody else." And then people say, "Well, that undermines traditional family values," right? And the people who support gay marriage say, "Well, so what? Those are your traditional family values, but they're not our traditional family values, and why should yours...." This is, after all, what Mill says, the tyranny of majority opinion. So what? Right? This is supposed to protect individual freedom against it, so one person's traditional family values is another person's tyranny of majority opinion. How are you going to resolve that? What mechanisms does Mill have to resolve that? 
And I think you could read On Liberty with a fine-tooth comb and not come up with one, and so you could say, "Well, so much the worse for Mill. We thought we had this wonderful rights-utility synthesis where all good things would go together. We could respect individual freedoms, and promote social utilitarian efficiency based on science all at the same time, but actually it turns out to all be resting on a hill of sand, and the minute you walk on it you start sinking." And I think this is a good time to go back to the point I made to you right at the beginning of the course when I said you shouldn't be expecting the silver bullet in this course. You shouldn't be expecting to find the theory that answers all your questions. What instead you're going to find is particular insights that you can pick up and put into your bag of tricks and move on with. Because I think it is, at the end of the day, a good critique of Mill to say, "There is no single definition of harm and there's no good account of who makes the decisions about harm." And so to that extent his claim at the beginning of On Liberty that there's one simple principle and this is what it is, fails. Nonetheless, there are some important and enduring insights here that I think we're not going to want to let go of. One is, when you think about all of the ways human beings interact, and all of the things we do that cause us to have the possibility of bumping into one another, maybe it shouldn't be the case that there is a single definition of harm because different definitions of harm are relevant to different types of situation. This is a little like the point somebody here made that Mill would be interested in the context. If you think about physicians; we allow physicians to buy malpractice insurance in case they kill you by mistake when they're doing surgery, but we don't allow bank robbers to buy malpractice insurance so that if they kill you by mistake when they're robbing a bank they can get off. 
That tells you right there, there must be different conceptions of harm that operate in different circumstances. And indeed, if you want to start thinking about it a little bit more systematically, harm is treated very differently in different situations. Think about this continuum I've put up here. Some kinds of harm are completely excluded. You're not held responsible for them at all. If you have a death penalty and legal execution of people, of course you harm the person you execute. Certain kinds of wartime killing, of course you harm the person that you kill, but we allow it. We don't count it as a relevant harm. Then here, I'm going down this continuum, we say that the intention to kill is very important in the criminal law. Does anyone know what this term mens rea is in the criminal law? No reason you should, but somebody might. If you're charging somebody with a criminal offense the government has to prove beyond a reasonable doubt various elements of the crime: that the person did it, that in fact the crime occurred, and that there was something called mens rea, or criminal intent. "Guilty mind," that's what it means. And in the criminal law we make that one of the elements of the crime. That's why things like the insanity defense become so contentious, because the person claims, "Well, because I didn't know what I was doing, I didn't have mens rea. My client didn't have mens rea. He didn't know what he was doing so he's not guilty by reason of insanity. He's not guilty," okay? Or if you think about--you might say, "Well, we do imprison people for vehicular homicide when they get drunk," and the person says, "But I didn't intend to kill them. I didn't know I was driving. How can you say I intended?" Interestingly, there we come up with a doctrine in the legal system which we call constructive intent. And what is the doctrine of constructive intent? 
It's basically we say, "Well, any reasonable person would know that if you drive down to a bar, and you have no way to get home, and you then drink ten beers, you would be putting yourself in the position where you could harm someone. So we're going to impute the intent to you even though you didn't have it." That's the doctrine of constructive intent. That's using exactly that kind of situation, so in the criminal law, to go back to Mill, we'd say "calculated to produce evil to someone else." It's got to be calculated by the person committing the harm, okay? So, why is that, you might say. Well, criminal actions are actions which bring moral opprobrium on people. We lock them up. They're really things we don't want people to do, and we want people to internalize the incentive not to do them. So it's about their intentional actions, and then we're punishing them for those actions. Therefore, when we put sanctions on them we want them to know that they could have behaved otherwise, and so we want the intent to be present, so constructive intent is sort of like intent, but not exactly. What about negligence? Negligence is even less like constructive intent. So negligence is the doctrine that says--let's suppose you live in a neighborhood where there are lots of children, small children, and you put in a swimming pool. And let's say the law requires that you fence in your swimming pool, but you leave the fence open, the gate open, and a child goes in and falls into your pool and drowns. The doctrine of negligence would say, "Well, yes, of course you didn't intend that the child would drown, but you were negligent in leaving the gate open, so we're going to hold you responsible." So it's a little less than constructive intent. It's negligent. Any reasonable person would know that you shouldn't leave the gate open. So you were negligent. It's still a state of mind, but it's obviously not the same thing as intending to kill the child. 
And then we could go even further down the continuum and say, "There are some situations where we say, 'We don't care about your state of mind at all.'" So if an adult has sex with a 14-year-old and walks in and says, "Well, Your Honor, she said she was 20, and I thought she was 20. She looked 20," and perhaps that's true. Perhaps all of those things were true: she said she was 20, I thought she was 20, and she looked 20. "We don't care," we say as a society. We don't care. That's the notion of statutory rape. It's sometimes called strict liability. We're going to hold you liable anyway. And why do we do that? Presumably to give people the incentive to make sure, to find out. So when we say, "If she was 14 it's statutory rape, or if he was 14 and she was 20 it's statutory rape," there it is, too bad. We don't care what she said, and we don't care what you believed at the time, because we want to ensure that people have the incentive to find it out correctly. Or we could think of Good Samaritan laws. This is the situation where you're walking along, and you see somebody drowning in a lake, and at very little cost to yourself you could pull them out, but you say, "I'm late for class, never mind. I didn't push her in the lake." In many states we have Good Samaritan laws, which say that if, at trivial cost to yourself, you could have assisted somebody in need of your help, you can be prosecuted for failing to do so, okay? So that's obviously a very capacious definition of harm. I mean, after all, the fact that we're all sitting here rather than doing relief work in Haiti right now means presumably that some people are suffering, perhaps even dying, as a result of our failure to go to Haiti right now. So once you go over this line into treating omissions to help as a form of harm, as we do with Good Samaritan laws, where will it end, right? 
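The continuum the lecture has just walked through can be summarized in a small sketch. This is my own gloss on the examples given above: the labels, ordering, and helper function are illustrative assumptions for teaching purposes, not a legal taxonomy.

```python
# A rough gloss on the lecture's continuum of harm standards, ordered from
# "the state must prove the most about your state of mind" down to
# "the state does not care about your state of mind at all".
# Labels and examples are illustrative, not legal doctrine.

HARM_CONTINUUM = [
    ("excluded harm",       "no liability at all",                      "lawful execution, wartime killing"),
    ("mens rea",            "actual criminal intent",                   "ordinary criminal offenses"),
    ("constructive intent", "intent imputed to any reasonable person",  "drunk-driving homicide"),
    ("negligence",          "failure to take reasonable care",          "unfenced swimming pool"),
    ("strict liability",    "no mental element required",               "statutory rape"),
    ("omission as harm",    "failure to help when the cost is trivial", "Good Samaritan laws"),
]

def requires_mental_element(standard: str) -> bool:
    """Does the state have to establish something about the defendant's mind?"""
    return standard in {"mens rea", "constructive intent", "negligence"}

for name, element, example in HARM_CONTINUUM:
    print(f"{name:19s} | state must show: {element:41s} | e.g. {example}")
```

Reading the table top to bottom reproduces the lecture's point: the further down you go, the less the law asks about intention and the more capacious the working definition of harm becomes.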
So you can see from this that if you wanted one definition of harm to cover all situations, that obviously isn't going to make any sense. Nonetheless, if you think about this continuum, where you fall on it has huge implications for how you're going to run your society. And more interestingly than that, I've given you a kind of static picture now, but these things actually change. Think about the drug thalidomide. Who knows what happened with thalidomide? Yeah, what happened with thalidomide? Student: It came out in the late '50s. It was used to treat morning sickness, and it turns out that the racemate of the drug causes teratogenic effects in children born to women taking the drug. And so it was basically a failure of the drug industry to look into the effects of the different structures of drugs in humans. Prof: Correct. That's a very good summary. And what's interesting about thalidomide from our point of view is that in developing this drug, which was given for morning sickness, the pharmaceutical companies didn't cut any corners. They did all the clinical trials correctly. They got the FDA approvals. Everything was done by the book. Nothing illicit was done, and at that time the general standard in tort liability--who knows what tort means, t-o-r-t? Not t-o-r-t-e, we're not talking about chocolate cake. What's a tort? Nobody knows what a tort is? Tort just means harm, right? So the general standard for tort liability was negligence, and they weren't negligent. They did it by the book. They got all of the approvals. They got the clinical trials. The FDA approved the drug. And it's a famous case because it was an instrument by which the courts decided to say, "You know what? We don't care. We don't care." They moved from negligence to strict liability. "We don't care that you did it all right. The fact is, there are all of these children who were born with missing limbs, and you're going to pay. You, the drug companies, are going to pay. 
We're going to hold you strictly liable. We're going to treat it like (the example I gave before) statutory rape." Now, you might say, "Well, why? Why would you do that?" And interestingly, the move in American tort law from negligence to strict liability--those of you who go on from here to the Yale Law School will learn all about it, because the intellectual giant of the move from negligence to strict liability was Guido Calabresi, long-time Dean of the Yale Law School, and now a Federal Judge on the Second Circuit. He wrote a book called The Costs of Accidents. And the main argument of The Costs of Accidents--this was actually about auto accidents--was essentially utilitarian, in that Calabresi looked at what had already started in New Jersey at that time. If people rear-end one another in cars you can say, "Well, who was negligent here? Was it the driver in front, or was it the driver in the back?" If it was the driver in the front, why, "Oh, well, he stopped too quickly," or something like that, or the driver at the back says, "Well, my brakes weren't working, or the driver at the front's brake lights weren't working." You have an argument. And they have their lawyer, and you have your lawyer, and you duke it out, and somebody wins and somebody loses. New Jersey said, "It's not worth it. It's a waste of court time. It's a waste of everybody's time. So we're going to make the law which says, in a rear-ending situation, the driver at the back pays, always." Kind of rough justice; sometimes maybe the brake lights on the front car really weren't working. "We don't care. It's just not worth the State of New Jersey's time to invest the institutional resources and all the rest of it to allow people to litigate these things. It just costs too much. It's not worth it." So Guido Calabresi came up with a little algorithm in which he said, "The standard should be to minimize the cost of accidents plus the cost of their avoidance." 
So you figure out the cost of all the rear-ending, right? And then you figure out, well, if we allow people to sue, what's the cost of avoiding accidents that way, whereas if we don't allow them to sue for negligence, what's the cost that way? It's cheaper. It's more efficient. And anyway, there are probably some good side benefits; it gives the person at the back the right incentive to keep a distance they can stop in, right? So we make a utilitarian judgment. The game's not worth the candle for negligence. That's one defense of it. Coming back to the thalidomide: minimize the cost of accidents and the cost of their avoidance. What do we want as a society? We want the drug companies to have the incentive to go the extra mile, to do even more research than they have to do as required by the FDA, to buy the insurance, because perhaps they're the deepest pocket. They have the resources to do the research and to buy the insurance. If you're a pregnant woman thinking about taking a morning sickness pill or not taking it, you don't have the resources to do extra research. So we put the burden on the party who can most cheaply avoid it. Now, the drug companies are saying, "That's outrageous. It's totally outrageous. We didn't do anything wrong and you're punishing us." Strict liability. So we moved, and the example of thalidomide is just the tip of an iceberg. In the whole of tort law there's been this thirty- or forty-year move from a negligence standard to a strict liability standard. And a fascinating intellectual debate between Calabresi, who's the champion of strict liability, and Richard Posner--you probably have read some of his books--a Judge on the Seventh Circuit in Chicago who defends negligence as more efficient. So it's a debate between utilitarians in that sense. And if we had more time I'd have you read some of that debate. You might be convinced at the end of it that Posner wins intellectually, but as a matter of the politics of it, Calabresi wins. 
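Calabresi's one-line algorithm--minimize the cost of accidents plus the cost of their avoidance--can be made concrete with a toy comparison. All the numbers below are invented purely for illustration; nothing here comes from The Costs of Accidents itself.

```python
# Toy sketch of Calabresi's standard: choose the liability regime r that minimizes
#     total(r) = cost_of_accidents(r) + cost_of_avoidance(r)
# The figures are made up; "avoidance" bundles litigation costs and the
# precaution costs each rule induces.

regimes = {
    "negligence": {"accidents": 100, "avoidance": 80},       # case-by-case fault litigation
    "strict liability": {"accidents": 90, "avoidance": 40},  # simple rule: rear driver always pays
}

def total_cost(regime: str) -> int:
    """Calabresi's objective: accident costs plus avoidance costs."""
    costs = regimes[regime]
    return costs["accidents"] + costs["avoidance"]

# Pick whichever rule is cheapest overall -- a purely utilitarian judgment.
best = min(regimes, key=total_cost)
print(best, total_cost(best))
```

With these assumed numbers the strict-liability rule wins, mirroring New Jersey's reasoning: even if the cheap rule occasionally misfires, the savings in court time and litigation outweigh the occasional injustice. The Calabresi-Posner debate is, in effect, a disagreement about which regime actually minimizes this sum in practice.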
That is to say American tort law has moved, since the 1950s, away from negligence to strict liability. Let's think of another example. In the 1950s in America, a husband could not be prosecuted for raping his wife. There was no such crime. There was, in other words, a conclusive common law presumption that there was no such crime. Now you might say, why? Well, there's a historical answer. It went back to the suspension of the legal identity of the woman during marriage. Essentially, it goes back to the old patriarchal laws. The daughter was more or less the father's property and then he gave his property to the son-in-law as the husband's property. So the woman didn't have a legal identity as a person during marriage, and this is why she lost control of her property, right? And that was restored, ultimately, by the Married Women's Property Acts that recreated women's property rights over, say, inherited wealth or whatever it was. It didn't automatically become her husband's property. But there was this hangover from the nineteenth century that a woman couldn't be raped by her husband, and not only rape but other things like assault. So in other words, you could walk up to a woman you were not married to on the street, and punch her in the face, and get arrested for that, and convicted of assault, but if you did it to your wife there was no crime. So then we have the women's movement. We have a lot of feminist pressure and organization, and so we now have gone to a world in which marital rape is a felony in about forty-five of the states and the federal system. So it's gone from having a conclusive common law presumption against it, to making it a felony, and all the other things have gone as well, so inter-spousal tort immunity is gone. You can be prosecuted for assaulting--in other words, marriage is no longer a bar to things that would be criminal actions outside. 
So we've had a huge change in the law there, from one definition of what counts as a relevant harm to a different definition of what counts as a relevant harm. In the first case it was an argument about efficiency, and the second is a political movement. The women's movement had this enormous impact on the redefinition of what sorts of harms the state should take seriously in marriage. Let me give you a third example. If you go into employment law, or housing law, or education, the American courts have, for many decades, at least since Brown versus Board of Education was handed down by a unanimous Supreme Court in 1954, been concerned with trying to get rid of discrimination. Discrimination's a kind of harm, right? So the question is: what do you have to show? What do you have to show to convince the court that you've been harmed and there should be remedy? What do you have to show? And during the Warren Court it was somewhat like strict liability. Earl Warren was appointed by President Eisenhower in 1953. Eisenhower thought he was appointing a conservative Justice, but he turned out to be wrong. Earl Warren was, perhaps, the most liberal Chief Justice of the twentieth century. And the court, under his leadership, developed the idea that all you had to do was show there was a pattern of discriminatory effects. You didn't have to show that anybody intended to discriminate against anyone. So in the education area, separate but equal, the court said, is inherently unequal. We're not saying that the White southerners are necessarily prejudiced against Blacks--many of them were--but we're not going to get to that question. We're just saying separate but equal is inherently impossible. It's an oxymoron. You can't have it, and we're saying that without reference to the intentions of school administrators or anybody else. In housing patterns you do these kinds of studies like Ian Ayres at the Yale Law School here is well known for having done. 
When people with exactly the same objective characteristics--same income, same employment history, and so on--go for a mortgage, it turns out that African Americans are denied mortgages at a higher rate than non-African Americans. You don't have to show that that mortgage officer was a racist, or was intending to discriminate. You produce the statistics; you show people with these objective characteristics who are African American get denied, and those with the same characteristics who are not African American don't get denied. It's a pattern of discriminatory effects. We don't have to get into something like mens rea or whatever is going on in the bank mortgage officer's head. So that was the standard also in employment discrimination and many other areas of the law of discrimination under the Warren Court. All you had to show was a pattern of discriminatory effects. But the Warren Court was gradually replaced, first by the Burger Court, then the Rehnquist Court, and now the Roberts Court, and in that conservative evolution discrimination law has gone this way on the continuum. Now you can't get a remedy unless you can show the intention to discriminate on the part of some public official. So let's say in zoning ordinances, you've got to show that some public official actually tried to do the zoning in such a way as to exclude Blacks from a certain neighborhood, let's say, before you can get a remedy. So in employment, and housing, and education, and all of these areas of discrimination, we've gone the other way as a society, right? We've gone from a very capacious standard, which would allow a remedy just from the objective indicators, the patterns of discriminatory effects, to treating it more like the criminal law. We say, "Unless you can establish in court that there was some particular person in that bank denying mortgages who intended to discriminate, no remedy." So it's much harder, of course, to get remedies. So those are three examples. 
Those are three examples--the thalidomide, the marital rape, and the discrimination--where you can see that there's enormous flux in our society as to what counts as a relevant harm. And if we had time, I strongly suspect that if we had time to have a debate in this room on those three topics, it's not like we would all agree as to whether in discrimination law the standard should be a pattern of discriminatory effects or intent. We'd have differences of opinion about that, and they would stem from our assumptions about the appropriate role of government, how intrusive it should be, and so on. Or probably there would be less disagreement today about the marital rape than there would have been in the 1950s, and I suspect intuitions would go all over the place about tort liability and the thalidomide. So what does that tell us? And this is the second reason I think it's really instructive to work through somebody like Mill, even if, in the end, Mill doesn't have the answer. I think one of the lessons of this course, and you can see it here very dramatically, is that it is impossible to get rid of political disagreement. It is impossible to reduce political choices to scientific choices all the way down. Science can play important roles in making political choices, but it can't make them for us, so that one of the big goals of the Enlightenment, to come up with scientific principles of politics, is never going to be perfectly realized. That doesn't mean it can't be partially realized, but it's never going to be perfectly realized. You can't wring the politics out of politics. There's no way to do it. And so you will see when we come to read Marx and he talks about his utopian communist society, one of his one-liners is that, at the end of the day when we finally have the true communist utopia, "Politics will be replaced by administration," right? That's a bumper sticker for saying we're going to wring the politics out of politics. 
Bentham, looking for the right objective utilitarian calculus, right? Mill wants to say, "Well, once the harm's triggered we can do the cost-benefit analysis on scientific principles." We talked about that last time. Never works. It never works all the way down. It doesn't mean to say that scientific thinking can't condition our normative choices, but you go back and think through these three examples that I've just given you; these are basically normative choices, right? In the 1950s when this issue about marital rape came up, a lot of things were said that are similar to what is said now about gay marriage. People said if the state starts prosecuting husbands we're going to destroy the traditional family. And people in the women's movement said, "Great! We're going to destroy this traditional family because men shouldn't be allowed to rape women. At least in this respect we won it," right? So traditional values only takes you so far because the harm principle is, by definition, a critical principle for looking at traditional values and saying, "How much should we cater to them, how much not," right? So these are political choices. The choice about thalidomide has huge distributive consequences, huge, enormous. You're going to say to pharmaceutical companies, "You're going to be liable from now on for the harmful effects of your drugs regardless of whether you got the FDA approval." It's a huge burden to put on them. Notice what we could say. We could say, "No," as the famous libertarian Judge Learned Hand said, "in society losses must lie where they fall." What does that mean? It basically means the women who had the thalidomide children and the children themselves must internalize the loss, or we could say, "We'll socialize the risk." We'll say, "In these kinds of situations when there was no wrongdoing in the sense of cutting corners, the government will bail them out." 
Just like we did after 9/11, we created this huge fund and paid compensation to the relatives of people who died in 9/11. Many people die all the time in disasters where we don't do that, but we could. We could have massive social insurance for unexpected harms. So the choice to say, "Losses must lie where they fall," with Learned Hand, or that we should socialize it by saying, "Well, if you're born with a physical deformity, obviously it's through no fault of your own, but really you can't blame the manufacturer of thalidomide either, so the state will pick it up. We as a society will do it." That's a different choice. Or if you say, "No, we're going to make thalidomide--the manufacturer internalizes this," that's a different choice. My point here is it's always a choice. It's always a choice. We'll come to read a libertarian called Robert Nozick later in the course, and one of his one-liners is that, "The fundamental question of political theory is whether or not there should be a state." But we'll see that that's a bit like saying, "The fundamental question of dental theory is whether or not there should be teeth," because in fact everything involves collective choices. Even the collective choice to let the loss lie where it falls, that is itself a collective choice. So you can't wring the politics out of politics. And at least one goal of the Enlightenment, we're going to see, there's this idea of replacing politics with science, or in Marx's formulation, of politics being displaced by administration, can never be perfectly realized. And that's an important insight we get from thinking about trying to apply Mill's harm principle. Okay, see you on Monday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro | 20_Contemporary_Communitarianism_I.txt
Prof: Okay, good morning. One prefatory point before we get into today's lecture. That's occasioned by having you read MacIntyre, but I ought really to have mentioned it before. MacIntyre's book, After Virtue, is in part a conversation with major figures in the tradition who you have not read, or at least not in this course, Aquinas, Nietzsche, Hegel and many others. And of course this came up with John Rawls, and indeed Robert Nozick, both of whom depended on arguments from Immanuel Kant that we haven't studied in this course either. And so one question that arises is, well, to what extent are you responsible for understanding the people on whom they are commenting? And of course you're entering into an ongoing conversation among these thinkers that's been going on for centuries, and to some degree you just have to jump in somewhere. Nonetheless, for the purposes of our course here you're certainly not responsible for understanding Kant's ethics, and indeed I could give several lectures on why it is the case that Kant would not have agreed either with Rawls's interpretation of his own work or with the Rawlsian enterprise. But we're not really interested in Kant in this course, but in Rawls, in that instance. So to the extent he depends upon a faulty reading of Kant's Groundwork of the Metaphysics of Morals, that's not a question with which we're engaged in this course. It's not that we're taking a position about it; we're just agnostic. Likewise, with the various thinkers that MacIntyre engages, you're not expected to know Saint Benedict, or Nietzsche, or Aquinas, or anybody else, or indeed Aristotle, about whom I'll have quite a bit to say today, except insofar as they are building blocks for MacIntyre's argument. So MacIntyre's work, in some ways, is a commentary on the history of ideas, but really it's first and foremost an argument, and we're interested in it as an argument, and that's how we're going to evaluate it. 
So, of course, it's an invitation to you, later on, to go into some of these thinkers in-depth with whom he is engaged and see whether ultimately you agree or disagree with the way in which he engages those thinkers, but that's not our agenda here. Our agenda here is to think of Alasdair MacIntyre as somebody who's making an argument in his own right, and that's how we're going to engage with his work. He is a political theorist who currently teaches at Notre Dame. Interestingly, they say that the hand that rocks the cradle controls the person forever after. He started out, I think he was raised in a Catholic-- he had a Catholic upbringing, but early on in his career, he must be well into his 80s now, early on in his career he wrote a book called Marxism and Christianity, and he was clearly wrestling with who wins out of Marx and Christianity. And in that book he concluded that Marxism won. And in his early incarnations he was a fairly conventional Marxist, but then gradually he came full circle and ended up rejecting not only Marxism, but the larger Enlightenment project of which Marxism, as you all know, is only one part. And he ended up affirming a kind of traditional mix of Aristotelianism and the Catholic tradition that informs his argument both in After Virtue and then a subsequent book, which I'm not having you read, called Whose Justice? Which Rationality? So he is somebody who, in an important sense, has come full circle. And I think that's an important piece of background to know in understanding his work After Virtue. He's written many other books too but this is the book for which he will be remembered. You might think it odd that a book with a title like that could have become a bestseller, but it really was a philosophical bestseller when it was published in 1981, and the edition you have includes an afterword where he responds to critics of the original book. 
So who is Alasdair MacIntyre, and how does he relate to the historical anti-Enlightenment thinkers we've already discussed, namely Burke and Devlin? Well, he is very much in the spirit of the tradition in which they both wrote, although, as you could probably guess from his historical trajectory, one thing that differentiates him is that at least for much of his life he thought of himself as somebody on the political left, whereas they were people on the political right, and we'll come back to the significance of that later. He is part of a general undertow or reaction against the Rawlsian enterprise in political theory. Other thinkers, which you're not reading but with whom he would naturally have some elective affinities, are the philosopher Richard Rorty who died recently, who wrote a fabulously good book called Philosophy and the Mirror of Nature, which was a critique of the Enlightenment project in philosophy. Rorty's argument was basically that the Enlightenment quest for certainty was a fool's errand. That there is no such thing as certainty to be had, we've discussed this quite extensively, of course, in connection with the early versus late Enlightenment, which is not a distinction Richard Rorty made. But in any event, he made the argument that the Enlightenment quest for certainty was a fool's errand begun basically by Descartes and taken to its apotheosis in Kant's Critique of Pure Reason, and that philosophers from Descartes to Kant got engaged in this hopeless endeavor of justifying philosophy from the ground up from indubitable premises. And when they failed to do that they thought that some important philosophical failure had occurred, whereas Rorty's point was they should never have been engaged in that enterprise to begin with. 
And he connects importantly to the modern pragmatist tradition of Dewey, and Peirce, and James, and to the postmodernist thinkers like Lyotard, to some extent Michel Foucault, and others that we don't have time to read in this course. So Rorty is an anti-modernist, but he's a postmodernist anti-modernist if you want lots of jargon. He thinks we should get beyond the Enlightenment project. He has also written some about politics, and indeed he has a political analog of his philosophical argument, the bumper sticker version of which is that thinking we have to justify our political institutions from the ground up is also a mug's game, and indeed a dangerous mug's game, because when we fail to do that we start to think that there's something wrong with our political institutions, that they're illegitimate because we couldn't justify them from the ground up successfully by the terms of the Enlightenment project. Therefore, they're not justifiable. And this, Rorty thinks, puts us at a competitive disadvantage. He was writing during the Cold War, with our antagonists behind the Iron Curtain, but I think he would make the same argument were he alive today about fundamentalist antagonists, with whom we would, by Rorty's way of thinking, be putting ourselves at a competitive disadvantage by holding ourselves and our institutions to a standard which cannot be met, and then, when we fail to meet it, losing confidence in our institutions. So we could have read Richard Rorty in this course, but the truth is, and this is a dogmatic statement and maybe some of you will second-guess me on it later, the truth is Rorty is a much better philosopher than he is a political theorist. And so I've chosen to have you read MacIntyre who I think is a better political theorist than he is a philosopher. There are others. 
Perhaps one of the most famous is Michael Walzer who wrote a book called Spheres of Justice, who also rejects the idea that the values guiding politics can be justified in a logical sense from indubitable first premises and generate guides for action in politics that must be compelling to any right-thinking rational person. All of these thinkers, Rorty, Walzer, MacIntyre, sometimes get grouped under this idea of communitarianism. And communitarianism is linked to the anti-Enlightenment endeavor in that it is the ahistorical version of tradition. That is, instead of appealing with Burke and Devlin to tradition as the basis for our values, communitarians appeal to community-accepted values as the basis for what should guide us. Now obviously the two things are connected and we'll see they're deeply connected in MacIntyre's historical account because communities are shaped by traditions. But at the end of the day what's going to be important for us is that the individual is subservient to the community rather than the community being the creature or the creation of the individual. So the community comes first. The individual is born into the community rather than the community being the product of some contract, or creation, or construction of the individual. So that's what all of these thinkers share in common. Now, one of the things that makes MacIntyre's book a little bit difficult to read is it's a work in the history of ideas that's written backwards. That is, he starts with the present and works back to the ancients. It's a very interesting thing to do. In fact, I once taught the political science 114 course, the intro to the history of ideas, and partly inspired by MacIntyre's effort I did it backwards. I went from Rawls to Plato. 
And there are interesting pedagogical challenges there, and I'm not sure whether it's worth doing just for its own sake, but MacIntyre does it for a reason; not just to be cute, which I think is maybe what I was trying to do. MacIntyre does it for a reason and his reason is that he thinks that sometime around the beginning of the Enlightenment the Western intellectual project went badly off the rails. And in some way his argument is an analog of the argument I made to you about Locke and workmanship. Because, after all, think about what I said about Locke and workmanship. I said there was basically a coherent story. God created the world. He has workmanship knowledge and rights over it. He creates humans with the capacity to act in a god-like fashion, miniature gods, although they're constrained by God's will, and it all fits together as a kind of coherent whole. Once you buy into the premises it all fits together, but then what happens in the history of the workmanship model is people start to secularize it, and so start taking on bits and pieces of the original workmanship idea without the unifying assumptions that gave that model its coherence. And we saw the various difficulties everybody ran into in doing that, Marx, and Nozick, and Rawls and many others. So MacIntyre does something analogous in his book. What he wants to say is that the task of coming up with compelling moral values to guide politics made sense in a framework of assumptions that we inherited, but we inherited in a kind of degraded way. That the unifying assumptions that used to give political morality its coherence have been jettisoned as a byproduct of the Enlightenment project and for that reason we need to go back in time and see where the project went off the rails, see what it was that happened that caused modern thinkers to get involved in this fool's errand of justifying morality from the ground up. 
Justifying morality from the ground up cannot be done because of the expectations about justification that we have developed, but MacIntyre's claim is you can't see that unless you go backwards in time to understand how and where the project went off the rails. So that's the big enterprise of his book, and we'll mostly get into that big enterprise on Wednesday. But I want to focus at the start on the beginning of his book, and the beginning of his book deals with the symptoms of our problem. Perhaps the most important symptom of our problem you've already confronted in this course when we talked about the transition from classical to neoclassical utilitarianism and the rise of emotivism, Charles Stevenson and all that. Does anyone remember? Maybe you've already forgotten all of this, it was so long ago now. Remember Stevenson said--this is the guy who didn't get tenure in the Yale Philosophy Department because he seemed to have this extreme relativistic and subjectivist view of ethics where moral choices were just differences in taste, differences in flavors of ice cream. You say the welfare state is good. I say the welfare state is bad. It's just like saying chocolate ice cream is good or strawberry ice cream is good. The differences about morality are just merely subjective differences. So we go from this certainty, subjective certainty in the early Enlightenment as making politics like mathematical geometric proofs for Hobbes and Locke to the mere subjectivism of the mature Enlightenment which produces this kind of relativist morality where everything is just subjective opinion, where morality is nothing more than emotion, and there's no particular reason even to think we have the same emotions. Remember that, as I think I said to you at the time, Stevenson was criticizing David Hume, who's another important utilitarian thinker who we didn't have time to read in this course either but you should all read at some point in your lives. 
And Hume had said, "Well yes, you can't get any important statements about what ought to be the case from empirical statements about the world. There's no way to get from is to ought," as Hume said, "But, you know what? Most people are pretty much alike. Most people are pretty much the same." So we can, to use the jargon of utilitarianism, we can make pretty confident interpersonal judgments about people. People are pretty similar, and so what's good for one person is likely to be good for another, and that's why Hume has this rather cryptic one-liner that scholars have debated, to the effect that "if all factual questions were resolved no moral questions would remain." It's this notion, well, people are pretty much the same and so even though morality is rooted in people's emotional reactions to situations it's not a big problem to having a morality that can form the basis of a society. Stevenson said, "How do you know? How do you know? How do you, David Hume, know? Maybe Adolf Eichmann has one set of emotional reactions to the prospect of shipping people off to the concentration camps, and you and I have a different set of emotional reactions to shipping people off to concentration camps, and if you're saying there are no principles by which we can adjudicate among those reactions, those emotional reactions, you're throwing us into a sea of relativism." And so when you get to emotivism, you're getting to this world in which we are completely without instruments for making moral judgments when people disagree. That is the emotivist culture. It is a culture of tastes and not of interpersonal judgments. And one of the things MacIntyre wants to say is that all of this becomes inevitable in the seventeenth century. It's just a question of time. It's just a question of time. Once you look at what was really going on in the beginning of the Enlightenment you're going to wind up with emotivism. Just a question of time. And the politics that comes out of it is pretty ugly. 
The politics that comes out of it basically leaves you without standards of moral judgment and indeed without questioning the raw assertion of power. So it's not only that in philosophy we wind up with emotivism, but in politics we're ultimately going to wind up with Nietzsche. We're going to wind up with a kind of nihilist assertion of the inevitability of the triumph of the will, the triumph of power. So again, Nietzsche is somebody else I wish we had time to talk about in this course, but you'll have to read him for our purposes through the eyes of MacIntyre. So it all goes back around the late sixteenth and early seventeenth century and then we're just rolling down this hill into the abyss of modern subjectivism in philosophy and nihilistic politics. Pretty depressing story you might think. So that's one symptom, that we live in what MacIntyre wants to describe as an emotivist culture. Another symptom of it, which you might not find as dispiriting as what I've just said, is what we--this isn't MacIntyre's terminology, but I think it makes the point--is a world in which instrumentalism has triumphed. A world in which there has been a total separation between means and ends. One symptom of this, again, not one he mentions in his book, but I think captures neatly what he's talking about is the proliferation of business schools. A hundred years ago there was no such thing as a business school in a university. Nobody had ever thought of the idea of even having a business school. And what's, I think, notable about business schools is that they're teaching skills that are unrelated to purposes. So business schools, after all, are trying to teach people how to become good managers. Whether you're going to manage the Coca Cola Corporation, or whether you're going to manage Goldman Sachs, or whether you're going to manage a university. The assumption is there are certain kinds of skills that managers have, that it's important to know. 
But business schools will not teach you whether it's a good idea to manage Coca Cola, or Goldman Sachs, or Yale University. That is not what business schools are about. So business schools, if you like, are predicated on the divorcing of means from ends. They're teaching certain kinds of instrumental skills that you can find helpful regardless of what the enterprise is you're going to end up managing. Being a good manager is being somebody who is inherently an instrumental person. And of course that leaves unanswered the question, "Well, but shouldn't we attend to what it is we are managing?" After all, that was a question that came up in our very first lecture in this course when we talked about the Eichmann problem; that he didn't care. He wanted to do well. He wanted to impress his superiors. He wanted to get an A. He was happy shipping Jews around the Third Reich to concentration camps as well as he could, but he would have been equally happy shipping munitions parts, or for that matter office supplies. It wasn't important as far as he was concerned. He wanted to be a good manager. So this is a very twentieth-century kind of preoccupation that we put the goal, the purpose, the ultimate endeavor aside, and we say, "What are the characteristics of being an effective manager?" To use the philosophical jargon it is a radically anti-teleological view. Teleology, teleological, have I told you what teleological--what does teleological mean, somebody? Student: [inaudible] Professor Ian Shapiro: Right, telos comes from the Greek word telos or purpose. Goal-directed, right? MacIntyre thinks that the rejection of teleology is a huge problematic enduring mistake, and I'm going to come back to why in a few minutes. But first I want to return to the first symptom I mentioned of our times. 
There are these two symptoms, the rise of subjectivism and emotivism and the nihilistic kinds of politics it brings with it on the one hand, and secondly this rejection of teleology on the other hand, and I'll say a little bit more about each of them. "Neither of them is what it seems," says MacIntyre. Who knows what the TV program that used to be on CNN for a long time called Crossfire was? Anybody, anyone ever see Crossfire on CNN? Yeah? Tell us how it works. Take the microphone and tell us how it works. How did it work? Student: [inaudible] Professor Ian Shapiro: Anyone? You might be too young. It's kind of sad. Yeah, some of you might not be quite too young. How did it work? Student: I think it was like a point-counterpoint exchange. Professor Ian Shapiro: Yeah, so how did it work? Student: So I'm not sure going into it whether you knew which side you--you definitely had to have known which side you were debating, or no? Do they just kind of give it to you, and then you either debate for or against a certain thing, and then there was a judge at the end who decided who the winner was? Professor Ian Shapiro: Basically, except for your last point. There was no winner. I'll come back to that. But basically you've got it right. The idea was they have a left-wing host and a right-wing host. So they would have Robert Novak as the right-wing host and Michael Kinsley, say somebody like that, as the left-wing host, and there would be some topic du jour, whether it was partial birth abortion, or whatever it was, affirmative action. And what would happen was they would then usually have two guests, and the guests were chosen also to be sort of ideologically different. And the Novak-type person would fire questions at the left-wing guest, and the Kinsley-like person would fire questions to the right-wing guest and they would argue back and forth, and it would get more and more voluble and impassioned. And then at two minutes to eight the commercial would come on and it would end. 
Why do I bring this up? I bring this up because of MacIntyre's observation right at the beginning of the book where he says there's a certain odd feature to moral argument in this emotivist world. There's a strange feature. On the one hand it's subjectivist in all the ways we've talked about. Everybody's views are equal to everybody else's. There's no authoritative figure. There's no authoritative figure to settle our disagreements, at least not an earthly one, and everybody is what they are and who they are and that's that. On the other hand, MacIntyre says, "If you look at things like abortion, or affirmative action, or nuclear weapons, people argue about these questions as though there were a right answer." They give reasons for their views. They try to show the other side as being hypocritical. They want to say, "My premises are more plausible than your premises." They argue with each other as though there were an answer to this question, should we outlaw abortion, or should we outlaw partial birth abortion. The arguments they get into suggest that everybody's assuming there is an answer to that question, but actually nobody expects the question to be resolved. And that's why I mentioned the Crossfire because what could never have happened on that TV show is sort of, at 7:46, Michael Kinsley turning to Novak and saying, "Hmm, you know, I never thought of that. Actually maybe you're right." If they did that, first of all the sponsors would pull their commercials. Kinsley would be fired. That's not what it's about, but then it's bizarre, isn't it? Because if everybody agrees that we're all subjectivists and that all our views are equally tenable or untenable, which they seem to, then why is everybody going through the motions of arguing like this? Why is everybody saying, "You don't make any sense, and this is misuse of evidence, and you're blah, blah, blah. And look, my argument's much stronger, and blah." 
Why would anybody bother if we really believed the subjectivism which we seem to take for granted? That, for MacIntyre, is the real symptom of what's wrong with our circumstances. The fact that we engage in interminable moral arguments that we do not expect to be able to resolve is the symptom of the malady of our time in his view because it suggests a kind of thirst and a set of expectations from the past, he wants to say, that we need to be able to recover. Because the fact that we carry on arguing suggests we don't want to accept this emotivist culture. We're not comfortable with it. It's not emotionally, morally, psychologically, philosophically, satisfying to us, not even acceptable. But so he thinks one of the things we need to be able to do is account for this puzzle, this puzzle that we engage in moral argument using the forms of persuasive reasoning that we don't actually expect to resolve, so that moral argument has this quality of Crossfire. So that's the one thing that we need to get some kind of grip on if we're going to understand what's wrong with emotivist culture. The second is this problem with teleology. They turn out to be related, but here's the problem with rejecting teleology. If I walked in here one morning and got up on this stage and I said to you, "Well, this morning I got up, got dressed, went for a run, came back home, took a shower, got dressed again, started walking down to the office. I crossed down to Orange Street, and then I crossed Cannon Street, and I got down to Whitney Avenue." At some point pretty soon you'd say, "What is the point of this? Why is he telling us this?" Human beings always want to know the purpose. What is the point? So we will never be satisfied with any activity that is pointless, that doesn't have a point. And the Enlightenment endeavor of trying to be agnostic about purposes and scientific about means is never going to be satisfying to us for that reason. People want to know the point. 
They want to believe their existences have a point, and if they don't they become disaffected, bored, agitated, unhappy, or worse. MacIntyre actually has a brilliant little essay called Epistemological Crises and Dramatic Narratives where he points out that if a young child asks you why the earth doesn't fall down, you tell them, say, a story that it's being held up by a giant, that the giant's holding the earth in his hands, and that's why it doesn't fall down. That's adequate for a while, and then they ask for another story when they stop believing in giants. But his claim is, it's something about the structure of human psychology that even explanations rooted in physics ultimately take the form of narratives. People want to be able to tell a story that we fit into, that has some point or purpose; that our basic understanding of the world is as teleological purposive creatures who tell narratives to give point to their existence. And we're going to become uncomfortable if we don't have a way of understanding politics that has a point. So that is the symptom of our plight, that we live in this emotivist world that we can't accept, and we have this bizarre love-hate relationship with it when you look at the kinds of moral arguments we actually engage in. And secondly we live in this world in which we have tried to cope with the deep pluralism Rawls writes about by taking goals off the table, purposes off the table, and seeing whether we can just be instrumental. So if you want another political theorist we don't have time to read, but who has a good one-liner to capture what MacIntyre thinks is the problem, it's Rousseau's line in the first paragraph of The Social Contract where he says he's going to come up with a design of institutions for society "taking men as they are and laws as they might be." 
Taking men as they are and laws as they might be, and the reason MacIntyre would think that problematic is, taking men as they are, men and women we might say today, as they are, ignores important questions about how they have come to be as they are and what the role of morality is in shaping and reshaping human nature. So the title of the book is After Virtue, and virtue, what modern philosophers call virtue ethics, comes out of a different tradition than anything we've considered thus far in this course, namely the Aristotelian tradition. Aristotle was the person who talked about the virtues, and what MacIntyre wants to say is, "We are at some important level that we don't fully appreciate or understand, products or the inheritors of a kind of degraded Aristotelian tradition." We have taken over concepts and categories for thinking about ethics from the Aristotelian tradition, but in a way that has become degraded, in a way that abandons the most important assumptions behind the Aristotelian tradition that make it all hang together. And the two key notions, the two analytical devices that make this argument work are what he calls a practice and a virtue. Practice comes first, and I'll say a little bit about that, and then I'll say a little bit about virtues, and then we'll go into his argument in more detail on Wednesday. A practice, he says here, is "any coherent and complex form of socially established cooperative human activity through which goods internal to that activity are realized." Not an engaging sentence. Let me try and give it content for you, so first of all the idea of a practice. This is the intuition. When you walk into a class at Yale for the first time, say, as a freshman, think about what you don't do. You don't say to yourself, "How should this class be run?" You don't immediately interrupt other people and say, "Let's all decide how to run this class. Shall we vote on it? Shall we talk about it?" That's not what you do, right? 
When you walk into your first Yale class as a freshman you sit down, you look around, you say, "What's going on here? What are the norms? What's expected of me? What am I supposed to do?" That's what you say to yourself. So right there MacIntyre wants to say the social contract metaphor is really bad, it's a misleading picture of human experience because people don't create their social world tabula rasa. Rather people are born into practices that they inherit from the past and reproduce into the future. A practice, it's complicated. It's already socially established. It's ongoing when you discover it. People have been teaching courses at Yale for centuries and there have been freshmen who have walked into them saying, "What do I do now? What's expected of me?" So the point is that the practice precedes the participants, not the other way around. So that's the first idea, a coherent and complex--it's coherent in that it has some goal, purpose. Enlightenment, let's say, is the purpose in this course, not in the sense of the Enlightenment, but enlightening you. Socially established, cooperative: he wants to say practices are not coercive (we'll come back to that later), it's cooperative. Human activity through which goods internal to that activity are realized. So that's an important term, internal. And here he has in mind something like this: let's suppose you're playing chess. You're playing chess with me and I have to go and answer the phone in the middle of the game. And while I'm not in the room you take one of my pawns off the board, I come back and you win. That's not playing by the rules. That's not an internal realization of a good. That's what we would call in his terminology, "External." So the idea of a practice is there are rules constituting the practice by which you have to excel. So you have to learn the rules. Cheating doesn't count. So that's the notion of a practice. I'll go into it in more detail. Virtues are what give practices their point.
Virtues have to do with the goals immanent in practices. He says a virtue is "an acquired human quality, the possession and exercise of which tends to enable us to achieve those goods which are internal to practices and the lack of which effectively prevents us from achieving any such goods." So, I'll leave you with this thought and we'll pick up from it on Wednesday. "What human beings want is to excel internally in practices," says MacIntyre. You've all heard the phrase "he's a pitchers' pitcher." When we say "he's a pitchers' pitcher" what we have in mind is the notion that he's so skilled that only a true pro can appreciate how skilled he really is. So if I write books and I also build sheds, if I show my books to people who know how to build sheds and they say, "Oh yeah, a really good book," and I show my carpentry to a bunch of nerdy academics and they say, "Oh, that's really good," that's not going to be satisfying to me because I want to be a pitchers' pitcher. I want people who know about books to be impressed with my books, and I want people who know about carpentry to be impressed by my sheds. That's the notion of internal goods that every practice has goods by reference to which you excel within that practice. You don't want to win at chess by stealing the pawn when the person's not looking. You want to beat them in terms of the norms and rules of playing good chess. So the notion is you walk into the classroom, you want to get an A, but not by downloading a paper off the internet. You want to get the A by reference to the norms and practices governing what goes on in the classroom. So that's the basic idea of virtues being internal to practices and giving them their point. And MacIntyre wants to say that these two terms, these practices and virtues capture a lot more that is relevant about human psychology than the assumptions that drove the Enlightenment. And we'll start with that on Wednesday.
The_Moral_Foundations_of_Politics_with_Ian_Shapiro
7_The_Neoclassical_Synthesis_of_Rights_and_Utility.txt
Prof: So last Wednesday I asked you to suspend disbelief and bear with me as we worked our way through the Pareto system as a way of backing into the political theory of John Stuart Mill, which is going to be our subject today. And I also promised you that a side-benefit of Wednesday's lecture was you would get to learn everything you ever needed to know about neoclassical economics. That in fact is true. Everything you ever do in economics is basically derived from or built from those simple ideas that Pareto and Edgeworth put together. So it is, indeed, I think, a side-benefit of working through it. But now what I want to do today is integrate what we saw in the Pareto system and the remarks I made about Stevenson's emotivism and philosophy, and come into the central arguments in political theory that are informed by these mature Enlightenment ideas. And you're going to see why I talk about the mature Enlightenment with respect to Mill further on in today's lecture. As you can see here, I talk about Mill as attempting to synthesize rights and utility. And you might think, "Well, okay. How's he going to do that?" Some of you who know a little bit more about Mill may also think it's odd that I have chosen his little essay called "On Liberty" for you to read, in explaining his utilitarianism, when in fact Mill wrote an essay on utilitarianism which I'm not having you read, although I'm certainly not prohibiting you from reading it. But I think that what you can see is that the synthesis of rights and utility can be approached from either end, and I'm going to approach it from the rights end at least initially, and then we'll worry about the utilitarian end of things later. But I think Mill's basic view is that whether you start to develop a fully satisfying conception of individual rights, or whether you start to develop a fully satisfying conception of utilitarianism, you're going to end up incorporating the other one of those two into your account.
Let me tell you a little bit about who John Stuart Mill was. He was the son of James Mill, who had been a contemporary of Jeremy Bentham's. Indeed, not only a contemporary, but actually a disciple of Jeremy Bentham's, and a true believer in Benthamite utilitarianism, including in the matter of the education of his son. He was very concerned to give his son the most efficient possible education in order to get him to achieve at the highest level. And so he was, what we would today call, home-schooled. There were governesses and schoolteachers brought to his home. He never went to school, and indeed he turned out to be a brilliant child. He was doing differential calculus at a very young age. He was speaking Latin and Greek in his teens. He was just an astonishingly smart child, and so they ramped up his education at an incredible clip with the result that, by the age of 21, he actually had a nervous breakdown. He had no friends. He had no life. He was a miserable brilliant nerd. And he never entirely recovered from that experience, and nor did he ever quite absolve Bentham and his father's single-mindedness from responsibility for doing that to him. And he was a somewhat pained and tortured person later in life. I think he never quite shed the scars. But his wife Harriet, who was a very interesting intellectual in her own right, wrote much of Mill's famous essay on the subjection of women that appeared over his name. And indeed, some Mill scholars think that Harriet also had a big role in the writing of On Liberty, but that's more speculative. So he never quite got over that early shock-and-awe utilitarian education, but he also never entirely shed the commitment to the idea that utilitarianism is the best system for thinking about politics. At one point he says, "I do endorse utilitarianism, but only in the largest sense of man as a progressive being." We'll come back to what that might mean as we proceed.
So Mill does have one useful characteristic in common with Bentham. I think it's the only one. As I said, that Bentham is one of these monomaniacal people, and what makes him useful to us is he takes an idea and runs with it to the absolute extreme, and that's useful because you can see its assumptions in a very sharp and stark light. Mill is somebody who is aware of the infinitely complex nature of human existence and is not a Johnny One Note in a sense that Bentham was, and you'll see this very quickly as we get into his argument. Nonetheless, he shares in common with Bentham the feature that he reduces his doctrine to a single paragraph, just as Bentham did. It was the opening paragraph of his Introduction to the Principles of Morals and Legislation. With Mill it comes about twelve pages in on the Hackett edition that you're reading. He says categorically, The object of this essay is to assert one very simple principle; as entitled to govern absolutely (this sounds quite unequivocal) the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties or the moral coercion of public opinion. That principle is that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number is self-protection. That the only purpose for which power can rightfully be exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or to forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise or even right. 
These are good reasons for remonstrating with him, or reasoning with him, or persuading him, or entreating him, but not for compelling him or visiting him with any evil in case he do otherwise. To justify that (that is, to justify compelling him), the conduct from which it's desired to deter him must be calculated to produce evil to someone else. The only part of the conduct of anyone for which he's amenable to society is that which concerns others. In the part which merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign. Anyone think they have any doubt about what he's saying? I mean, setting aside whether you agree with it. Anyone think you feel pretty clear about what he's saying? Never mind whether we agree with it. Okay, only one person feels clear. Who feels unclear about what he's saying? So we have one person who's clear, nobody who's unclear, and 136 undecided? Wow! I mean, isn't it clear? As I say, I'm not asking you whether you agree. It's just saying, unless you harm somebody else you've got to be left alone, and the statements a) "Leave you alone," and b) "Stop anyone else who wants to interfere with you," harkening back to Bentham's point that the law should stay the hand of a third party, right? That's what he's saying here, yeah? It's not rocket science. I mean, it's very direct. Okay, so that's what he's saying. You might say, "Well, okay." I mean, just to give you an example, he's saying if one of you comes to me at the end of this course and says, "Professor Shapiro, will you write me a letter because I want to go to law school?" And I say, "Well, I've come to know you, and you've got a lot of good qualities and skills, but you are not a lawyer. Trust me. I've been around a long time. You should not go to law school." The appropriate answer, Mill would say, is, "Well, thank you for your opinion. I'm not asking you to tell me what to do." I can plead with you. 
I can remonstrate with you. I can try and persuade you, but at the end of the day if you say, "Well, thank you very much, but I'm going to law school," I shouldn't try to coerce you. And not only that, I shouldn't try and get others to put pressure on you, right? It's not only should you not be compelled, but we shouldn't try and coerce you with the moral force of public opinion. We shouldn't start telling lawyer jokes to make you feel bad, right? So we have to respect the autonomy of the individual. Complete opposite, at least going in, from where Bentham starts, right? And one of the things you should see from this, and should be starting to go through your mind, is that there is a deep structural identity between Mill's harm principle and the Pareto principle that we discussed last time. I'll come back to that later, but there's a basic structural identity between those two things. Now, you could say, "Okay, so Mill is saying respect everybody's rights. This is a strong theory of individual rights. Unless somebody's harming somebody else they're to be left alone and the state has to make sure that they're going to be left alone. Fine, that's a theory of rights, but what does this have to do with utilitarianism?" Right? How does Mill get from protecting freedom of the individual through this robust doctrine of individual rights to the notion that we're going to maximize the utility in society? Anyone have any idea? Anyone? Yeah? Yes, sir? Wait for the microphone. Student: Does he argue that freedom leads to the highest level of pleasure? That an individual given that type of freedom would gain utility from that and thus maximizing that type of freedom at the individual level? Prof: But why? Student: Why do we gain utility from freedom? Prof: Yes. In Mill's mind, why does maximizing freedom also maximize utility? I mean, you're right. You're dead right, but there's an intermediate step. Yeah? Okay, over here. 
Student: According to Mill, only individual people can decide for themselves what makes them happy. Prof: According to Mill only individuals can decide for themselves what makes them happy. That is also correct, but there's still another step in this that I want us to focus on that neither of you has mentioned yet. Yeah? Student: Maybe that individuals, knowing their own desires, will bargain with one another, using their freedom to gain the mutual maximum utility. Prof: Well, that's a very good point, and that is where the identity with the Pareto principle comes in. If you leave people alone, right, they will do what they want with themselves or with others, and so what Mill's harm principle allows in politics is this sort of analog of what we were calling Pareto superior in economics. That's also true, but it's not what I was looking for right now, though there's no reason you should know that because you can't read my mind. But there's another step. Think back to the big picture. The big picture is all these Enlightenment theories are committed both to the freedom of individual and to scientific truth, right? Mill is an Enlightenment thinker of the first order. If you flip through your whole copy of On Liberty you'll see--what is the longest chapter about? What is the longest chapter about? It's probably half the book. Student: Freedom of thought and discussion. Prof: Freedom of thought and discussion, right. Freedom of speech. So why is freedom of speech so important? Because Mill thinks that is the path to the truth. Freedom of speech is the path to the truth. I want to spend a little bit of time on this because it's really important. It's really important for two reasons. The first is you're going to see a very different conception of science informing Mill's work. Remember back to our discussion of the early Enlightenment of Hobbes, of Locke, of Bentham, that truth was equated with certainty. 
Remember, the seventeenth-century people had this weird route to this because it wasn't what we think of as a priori truths, but things that were a product of wills and all that, right? But the early Enlightenment idea of science is to find certainty, right? Cartesian doubt, remember, is looking for propositions that cannot be doubted, things that can be known with certainty. Mill is a fallibilist. Mill has a much more modern concept of science, the one that you intuitively have, which says, first of all, that all knowledge is corrigible, all propositions have to be evaluated by reference to evidence in the scientific method, and we could always be wrong in our attempts to do that. So a very important move in the history of the philosophy of science, in this regard, was to move away from what philosophers called verificationism, proving that a scientific theory is correct, and instead to start talking about falsificationism, proving that it hasn't yet been falsified. And so any of you who takes a statistics course in the social sciences will know what you have is a hypothesis, an empirical hypothesis, saying high tax rates lead to inflation. And you go and you'll test it against the evidence. And you'll have some other hypothesis that will be called the competing hypothesis, or the null hypothesis, and all you'll ever be able to say is that your hypothesis hasn't been shown to be false. You'll never know for certain that some other hypothesis couldn't do better, okay? So falsificationism is the idea that would eventually become associated with the philosopher Karl Popper who we don't read in this course. But it's this idea that knowledge claims are corrigible. All of our knowledge claims might be wrong, and the scientific attitude involves recognizing that and acting accordingly.
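The falsificationist logic described here can be sketched in a few lines of Python. This is not from the lecture: the "tax and inflation" numbers are invented, and a permutation test is just one standard way of doing it. Notice what the procedure can and cannot say: it reports how surprising the data would be if the null hypothesis ("no difference") were true, so at best the null gets falsified at some threshold; the favored hypothesis is never proved correct.

```python
import random

def permutation_p_value(sample_a, sample_b, n_permutations=10_000, seed=0):
    """Estimate how often a label-shuffled difference in means is at least
    as large as the observed one. A small p-value says the null hypothesis
    of 'no difference' has been falsified at some threshold; it never
    proves the alternative hypothesis true."""
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        a = pooled[:len(sample_a)]
        b = pooled[len(sample_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_permutations

# Invented inflation rates under two tax regimes, purely for illustration.
high_tax = [3.1, 2.9, 3.4, 3.2, 3.0]
low_tax = [2.1, 2.3, 1.9, 2.2, 2.0]
p = permutation_p_value(high_tax, low_tax)
```

Even when `p` comes out tiny, the corrigibility point stands: a rival hypothesis could always fit the evidence better, so the claim survives only until it is falsified.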
So the mature Enlightenment conception of science means you have to be committed to finding the truth as an ongoing quest, and this is really important for Mill, okay? So freedom of speech is really important for Mill as a path to the truth, as the path to the truth. Now, that's one reason it's important. The second is that Mill injects into a desirable political system the importance of argument, of arguing. And this is going to come up again and again in the course, particularly in the last section when we get to democratic theory. When I say Mill talks about the importance of argument, this is very different from deliberation. It's not the idea that we should all get together, and hold hands, and sing Kumbaya, and see what we can agree about. That's the sort of deliberative ideal, right? Deliberation. Argument is, how many here have seen Prime Minister's Questions on TV, right? That's argument, okay? Or Crossfire, the TV program, that's argument, where people hurl the best criticisms they can come up with against the other side. It's not surprising that Mill is often held to be responsible for the metaphor of the competition of ideas. These are two very different models of the role of speech in politics. Just to give you an example of what's at stake here, there's a lot of experimental work that's been done by social psychologists on this question. So suppose there's a field and in the middle of the field there is a cow, and we're all standing around the field looking at the cow. And the question is, what does the cow weigh? And think about two ways of tackling this question. One would be that we all discuss--"What do you think the cow weighs? What do you think it weighs?"--and we eventually reach some agreement upon what we think the cow weighs and we go with that number. The second approach would be to say we don't talk to each other at all. Each of us looks at the cow and makes our own best judgment about what the cow weighs.
We add them all up and divide it by the number of people. Which method do you think is more likely to get the weight of the cow accurate? How many people think the deliberative method? Hands up for the deliberative method. Okay, it looks like about a third of you. How many for the non-deliberative additive method? Okay, so you win two-to-one. Well, it turns out you're right. It turns out that the non-deliberative additive method gets the answer right almost exactly, where the deliberative method goes all over the place. Now, there's lots of speculation about why. Now, one reason could be, well, the trouble with the deliberative method is it's going to lead people to listen to strong personalities, or people who think they know more than they do. Leonid over there says, "Look, I grew up on a farm. Don't tell me about cows. I know everything there is to know about cows, what do you people know? And I say that cow weighs 1500 pounds." And then a lot of other people say, "Well, he did. He grew up on a farm. What do I know?" And so maybe opinion gets swayed in that way. That's one possible reason. People may copy what other people say just because they don't know, etcetera. But for whatever reason, and we could speculate, and when we come to talk about deliberative democracy later we'll go into it more. I just wanted to flag this distinction that argument is not deliberation, okay? And so when Mill talks about argument it's rather this idea that everybody makes their own independent judgment. He wants our capacity to make that judgment to be strengthened, but that's not the same thing as deliberation. He wants us to have our own individual robust judgments and trust them, okay? That is his ideal, and we should never ever kowtow to the opinions of others. This is not a deliberative model. And indeed, Mill gives us four reasons for thinking that freedom of speech is important. 
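The non-deliberative additive method is easy to simulate. This sketch is not the actual experiment: the cow's weight, the crowd size, and the noise level are all invented, and the key assumption is baked in: each guess is independent and unbiased, so individual errors cancel when averaged.

```python
import random

def crowd_estimate(true_weight, n_people, noise_sd, seed=7):
    """Simulate the non-deliberative additive method: each person makes
    an independent, unbiased but noisy guess, and the crowd's answer is
    the plain average, with no one swayed by anyone else's opinion."""
    rng = random.Random(seed)
    guesses = [rng.gauss(true_weight, noise_sd) for _ in range(n_people)]
    return sum(guesses) / len(guesses), guesses

TRUE_WEIGHT = 1200  # pounds; a hypothetical cow
average, guesses = crowd_estimate(TRUE_WEIGHT, n_people=500, noise_sd=200)

# The averaged estimate lands far closer to the truth than a typical
# individual guess does.
crowd_error = abs(average - TRUE_WEIGHT)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)
```

The averaging works only because the errors are independent; if everyone anchors on the self-declared farm expert, the errors become correlated and the cancellation that makes the additive method accurate disappears, which is one candidate explanation for why the deliberative method goes all over the place.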
For one thing he says here--this is the point about fallibilism-- he says if any opinion is compelled to silence, that opinion, for all we might know, might be true. "To deny that is to assume our own infallibility." So science is not about certainty, it's not about faith, right? It's recognizing that whatever we say might be wrong. Secondly, though a silenced opinion be an error, it may, and very commonly does, contain a portion of the truth; and since the general or prevailing opinion on any subject is rarely or never the whole truth, it is only by the collision of adverse opinions (that's Prime Minister's Questions, that is Crossfire, the collision of adverse opinions) that the remainder of the truth has any chance of being supplied. Thirdly, even if the received opinion be not only true, but the whole truth; unless it is suffered to be and actually is, vigorously and earnestly contested, it will, by most of those who receive it, be held in the manner of a prejudice, with little comprehension or feeling of its rational grounds. That is, you don't want to only get the right answer. You want to get the right answer for the right reason. If you copy somebody's math assignment when you can't do the problem you have the right answer, but you haven't got the right answer for the right reason. And not only this, but; fourthly, the meaning of the doctrine itself will be in danger of being lost, or enfeebled, and deprived of all its vital effect on the character and conduct: the dogma of becoming a mere formal profession, inefficacious for good, but cumbering the ground, and preventing the growth of any real and heartfelt conviction from reason or personal experience. So you can say that Mill is, in some ways, what we would think of today as a libertarian. He's got this idea of freedom of speech. 
We should all be left alone to do as we like without interference from the state except when the state stops others from interfering with us, right, what Nozick will later call the night-watchman state of liberal theory, this negative-freedom, standard libertarian view. On the other hand, he's also a kind of romantic individualist, right? He sees individual human flourishing. Somebody said here, "The path to happiness. Everybody knows their own sources of utility. Nobody can tell you what makes you happy." This is the link to Stevenson we were talking about last time. I can't tell you what should be in your utility function. I don't know. No interpersonal comparisons of utility, that's the link to Pareto. So you can see in all of these fields this move to--it's not mere subjectivism, it's the romantic celebration of subjectivism, right? The full flourishing of your potential can only happen if you are allowed total freedom of speech, of anything you want to do so long as you don't harm others. And this is important not just for your own individual utility function, but also because that's how society learns the truth, and truth is going to be important for the pursuit of utility. You need these tough-minded critics. Whereas for Locke we were all miniature gods who have maker's knowledge about creation, for Mill we're all miniature scientists. We've got to have the critical attitude, and you can't get a critical attitude if you're copying other people's math. You have to be able to defend your reasoning to all comers. You have to stand there like Gordon Brown at question time and have people hurl counter examples at you, not people who are trying to get your agreement, okay? It's the combat of ideas, the clash of ideas. The truth comes out as a by-product of that just as in the invisible hand theory of markets the truth is a by-product, efficiency is a by-product of lots of individual transactions, right? 
So that is the connection, if you like, between Mill's idea of the importance of each individual getting the truth for themselves and the Pareto principle. In both cases it's an invisible hand explanation, which says that as a byproduct of this, utilitarian efficiency is maximized. That's why the chapter on freedom of speech is central to this doctrine. Okay, so all well and good, you might say, but how many read to the end, the chapter on applications? It all starts to unravel, it seems, once we get to the chapter on applications. Here Mill says, In many cases, an individual, in pursuing a legitimate object, necessarily and therefore legitimately causes pain or loss to others, or intercepts a good which they had a reasonable hope of attaining. Such oppositions of interest between individuals often arise from bad social institutions, but are unavoidable while those institutions last; and some would be unavoidable under any institutions. Whoever succeeds in an overcrowded profession, or in a competitive examination; whoever is preferred to another in any contest for an object which both desire, reaps benefit from the loss of others, from their wasted exertion and disappointment. But it is, by common admission, better for the general interest of mankind, that persons should pursue their objects undeterred by this sort of consequences, in other words, society admits no right, either legal or moral, in the disappointed competitors, to immunity from this kind of suffering; and feels called on to interfere, only when means of success have been employed which it is contrary to the general interest to permit; namely, fraud or treachery, and force. What's the problem with all of that? It's not exactly eloquent, but what's the problem there? Isn't there a problem? Maybe there's no problem. That's called a clue. What's the problem? Yeah?
Student: According to this couldn't you reason that something like the Holocaust was okay if it's in the general interest of mankind, or any kind of... Prof: Yeah. It's a big problem, right? I mean, didn't he say earlier on that people can't be coerced into accepting results just because the majority believes it? And he went out of his way to say whether it's the actions of the majority or the moral coercion of public opinion, but here he's saying, "But it is by common admission better that we have competitive exams. Of course the people who don't get the job are harmed, but it's too bad." Seems like a contradiction. No? Give another example: Again, trade is a social act. Whoever undertakes to sell any description of goods to the public, does what affects the interest of other persons, and society in general; and thus his conduct, in principle, comes within the jurisdiction of society: accordingly, it was once held to be the duty of governments, in all cases which were considered of importance, to fix prices, and regulate the process of manufacture. But (I love this passive voice) it is now recognized, though not till after a long struggle, that both the cheapness and the good quality of commodities are more effectually provided for by leaving the producers and sellers perfectly free, under the sole check of equal freedom to the buyers for supplying themselves elsewhere. This is the so-called doctrine of Free Trade, which rests on grounds different from, though equally solid with, the principle of individual liberty asserted in this Essay. Same problem, right? It is now recognized, by whom? Why should we believe that? And more importantly, aren't we supposed to be protected from the dominant view, right? So, free trade. We think about the arguments we have today. This is a century later. The arguments we have about outsourcing.
Yes, they harm the interests of American workers when they move factories to Mexico, but Mill said, "Yeah, it's true, but free trade's better." It's better from the standpoint of utilitarianism. Big problem, it seems. You think Mill was just actually not that smart, he didn't see this huge contradiction? Sort of, right, the minute you start to apply this doctrine it all just turns to sand? Anyone think there's a way out of this for Mill? Well, people have been struggling with this ever since he wrote it because it does seem to be a big problem, but on the other hand the allure of this rights-utility synthesis is so great that people want to find a way to solve it. And I think this is how Mill thought about this: there's no contradiction at all. I think that Mill thinks in terms of a two-step test. Step one, as you say, of any proposed action, is there going to be harm to somebody else? So, smoking marijuana, or, more contextually appropriate at the time, prohibition. This is a case that Mill considered in what you read. If you go to your room and you get paralytically drunk or you get stoned, and you sleep it off, you're not harming anybody. So it's protected. So Mill was a libertarian in that sense, and he opposed prohibition which was a very live issue when he was writing. But there are a lot of activities where it's inevitable that there's going to be harm. Yes, it's true that protectionism harms some people, but any trade regime is going to harm some people, right? So, I'm sorry; free trade harms American workers, but protectionism harms African workers or Indonesian workers, right? Whatever you do for a trade policy somebody's going to be harmed, or whatever system you have for giving away jobs in the civil service, whoever doesn't get the job's going to be harmed. If you have pure competition the people who don't get the highest scores are going to be harmed.
If you have job reservation for whites, as they had in South Africa, then blacks are not going to get the jobs. If you have affirmative action to remedy past injustices in the Connecticut Fire Department, then the people who would otherwise have gotten the jobs are going to be harmed as the Supreme Court said last year. So Mill's point is you first make an inquiry. Is there a harm? If the answer's no, the action's self-regarding and it's protected. Free speech doesn't hurt anybody. That's why it's so important to protect it. Indeed, he wants to say the externalities of free speech are positive. Free speech doesn't hurt anybody. Drinking doesn't hurt anybody. Now, some of you might question that. You might say, well, if you go to a bar and you get paralytically drunk, and you then get behind the wheel of a car, and you go home and you kill somebody, drinking does harm. What do you think Mill would say to that? I think Mill would say, "Well, that's a reason to penalize drunk driving, but not drinking," right? So I think that's what he'd say to that. But, so the first step is you ask, is there a harm to others? If the answer's no, it's protected by the harm principle. If the answer's yes, there's a harm to others, then you make a utilitarian calculation as to what's best for society. So if there's a harm to others then you make the utilitarian calculation, and that's why it's important to have good science. Because when you make the utilitarian calculation you want to bring the best scientific knowledge to bear on making that calculation. He doesn't trust majority opinion, right? He wants to say, "Free trade is better than protectionism. We now know that as a matter of economic science," when he was writing. "If somebody could come along and show that there's something other than free trade that would be even better then we would pick that," okay? 
So it's not the case that he wants to say this is infallible knowledge or known for all time, "But, for the moment, the best scientific judgment, when I am writing this book on liberty," Mill says, "is that free trade maximizes utility." So step one, is there a harm? If no, it's protected. If yes, then you make the utilitarian calculation, and then it's important to have good science behind you, not majority opinion, right? And that's why freedom of speech is the pathway from liberty to utilitarian efficiency. And that's why all good things go together in Mill's account, and we can have this hunky-dory synthesis of rights and utility. Great, right? Now, we're going to go more deeply into this question on Wednesday, whether it is all hunky-dory, because I've said now, well, there's a two-step test for determining harm, and that makes sense, and it makes the apparent contradiction go away, and I think it is the best reconstruction of what Mill wanted to say even though he could have said it more clearly if he had come out and done that. Nonetheless, there's still the question of who gets to decide what counts as a relevant harm. I said you do the first stage, is there harm? But just from the little example I gave of drunken driving and drinking you can see that this might be problematic. One of the things I want you to think about between now and Wednesday, some other examples such as prostitution. Does that involve harm to others or not? Okay, I don't want to answer that now. Just think about it. I want to ask you that.
|
How_We_Teach_5111SC_Principles_of_Chemical_Science
|
Meet_the_Educator.txt
|
CATHERINE DRENNAN: So I'm Cathy Drennan. I'm a professor of chemistry and biology here at MIT. I'm also a professor and investigator with the Howard Hughes Medical Institute, and I've been teaching freshman chemistry here at MIT, particularly a course called 5.111, since I started in 1999. So I've always loved education, and when I started at Vassar College, I went there to study either drama or biopsychology. I mentioned I'm a chemist and a biologist. You're like, what happened there? So I went there and I didn't really know what I wanted to study. And they said, oh well, if you're thinking of anything biology, you have to take chemistry. And I said, oh no. Please don't make me take chemistry. I took it in high school. I can tell you it has absolutely nothing to do with biology. It's deadly, dull, don't make me. Well, I had to take it, and I took it freshman year, first semester, and fell in love with chemistry. And I realized it was about teaching and how one teacher could really make such an enormous difference. I became interested in teaching. Vassar College actually had no research at that point. So my summers I had to do something else. I couldn't work in a research lab. And so I was a summer camp counselor for eight- through 12-year-olds. And I taught them about botany and all sorts of other things, and I was like, this is great. So I signed up to take education classes. And my advisor in chemistry was like, what now with this thing I'm signing? You're taking this education course? What is this? But I was just absolutely in love with it. And then when it was time for graduation, I thought about applying to grad school, but I wasn't really quite sure. And I wanted to really try out this teaching thing and see what I thought of it. So I got a job teaching high school, and I taught high school at a Quaker boarding school and working hog farm. And you have to understand, I'm from New York. And in New York and New Jersey, I could be an hour away from New York City. 
That was OK. And then I was going to Iowa. So I packed up and moved to Iowa to this Quaker boarding school, and I taught chemistry, biology, and physics and drama. So that came in handy. I did end up taking some drama. And I absolutely loved it. But as a high school teacher, I really wanted to enrich the curriculum. And I was looking for things to say, oh, you need to learn this in chemistry because the big question that scientists are trying to answer right now is this thing. And you need to know this to do that. But I didn't really know what this thing was. So I felt like-- and I tried to get other material to learn more about the world of research. But there just wasn't very much good material out there, especially for chemistry. And so I went to grad school at University of Michigan, and I thought, OK, I'll learn about the world of research and I'll go back and teach high school. And so I just tell people that I got slightly distracted on my pathway to do that. But I ended up at MIT and teaching here is such a wonderful experience. I mean, the students are absolutely incredible. And I've found a home. I can run a research group, but I also can just have a fantastic time teaching. And so that's my unusual path, I guess, to teaching this course at MIT.
|
How_We_Teach_5111SC_Principles_of_Chemical_Science
|
Spotlighting_Contemporary_Chemists.txt
|
CATHERINE DRENNAN: For 5.111, we created this video series, and we called it Behind the Scenes at MIT. And there are two kinds of videos. One where undergrads, graduate students, post-docs, or another faculty member are talking about how the basic chemical principles we're learning about in the class are used in their laboratory or by them, and how those chemical principles will make the world a better place. DARCY WANGER: My name is Darcy Wanger, and I work as a graduate student in the Bawendi lab at MIT. I work with quantum dots in my research. Quantum dots are really, really tiny particles of a semiconductor. People in our lab are working to make quantum dots bind to a tumor. So when a doctor goes in to remove a tumor, they can see, just shining a UV light on it, whether it's all gone when they've taken out the tumor. CATHERINE DRENNAN: It was interesting because I talked to some of the students in the class about their sort of perceptions of things. And then, after they had watched the videos, sort of what they thought. And a lot of students actually said, I was wondering. You're learning this, and it's good, it builds character to learn something that could be challenging, but I wondered, am I ever going to use this for anything? And then they started watching these videos and seeing this, and they're like, oh, yes. This is used all the time. It could be used in my undergraduate research. People are doing this. This is a subject where people are actively learning new things. And I feel like this is a question for a lot of intro classes, because you have these thick textbooks and you sort of feel like everything that could have been learned, it seems like it's all there. Like volume 2. There are three laws of thermodynamics. Are people trying to find a fourth law? Someone who does chemistry research-- what are they actually doing? Discovering? No, we know what the electrons are already. 
And so this gave people a sense of what people were using chemistry for now. What were those current questions. And when I was a high school teacher originally, that was really what I wanted. What are people doing now? What are the key questions? If I was studying chemistry, what would I be doing? And I want to help create this material so other people can see that. Because for some people, if you tell them it's hard and it's a challenge to learn, they'll just learn it. And if they don't use it-- it could be like Latin, it builds character, it's fine, it's a dead language but OK, I'm going to get in there. But then for other people, they really want to know that this is going to be useful. And if they're going to really invest in it, they want to know that it's important and they can do things. And MIT students and I think so many people out there want to make the world a better place. There are a lot of really wonderful human beings. And they need the tools, and they want to do something that's important. So I want to create those tools for them to learn this so that they can apply it and do something and they can see the power of chemistry.
|
How_We_Teach_5111SC_Principles_of_Chemical_Science
|
Building_a_Team_of_Teaching_Assistants.txt
|
CATHERINE DRENNAN: One of the big challenges of a large lecture is having the graduate student TAs be really an integral part of this. And I think if the grad student TAs are not that excited about this teaching assignment, it's not so good. You want to have this sense of enthusiasm and energy. So one of the problems I had when I was first starting with this is that this TA assignment was not the most popular. When the grad students are coming in in their first year, there are organic chemistry grad students and they want to teach organic chemistry. And there are physical chemistry students who want to teach thermo and kinetics. Biochem students want to teach biochemistry, and inorganic chemistry students want to teach inorganic chemistry. So who is there for general chemistry? And so often, it would be the booby prize assignment to get stuck in that class, and there might be a feeling like, oh, I got stuck because I wasn't the organic chemistry professor's first choice. So I wanted to change all that because to me, teaching freshmen in freshman chemistry is the most fun teaching ever. So I knew there were grad students out there that could engage and agree with me that this is going to be a lot of fun. I admit, it's more work than all of those other courses, but it also can be more fun. So I thought, we just need to get the right people in and I need to educate them about what this experience can be like. So I decided that once the students were accepted to come into MIT, I would send them an information packet about how they could be part of this exciting teaching opportunity, and I made it glossy and pretty and sent it to them and asked them to apply to be part of the class. And so we got some wonderful applications. I got a great group of people to be part of this class. So instead of being like, oh no, I have to do this, they're like, yes, freshman chemistry. And we brought them in early, and I got to know the group. 
We had extra TA training in the beginning. And t-shirts, and by the time we started, everyone knew each other. And it was an opportunity, also, for the TAs to make friends because they're first year graduate students and so they get to know other first year grad students and they build a sense of team as well. So for many years now when we've been really doing this-- having this extra TA training and applications to be part of the class-- we've started the semester with a group of people who are just ready to go and really excited about being part of that. And I think that has had such a wonderful impact on the course as well. They see this whole teaching team that really want to be there and are excited about this opportunity and care about each and every one of those students. And the comments that I've gotten later, both from the students and also sometimes from parents, that their child in their freshman year really felt a connection as part of this class and had this TA just checking in on them, not just about this class, but about how they were doing overall. And it was a support system that went beyond the teaching of chemistry, which made me really very happy.
|
How_We_Teach_5111SC_Principles_of_Chemical_Science
|
Using_Humor_to_Engage_Students.txt
|
CATHERINE DRENNAN: I of course want my students to be as excited about chemistry as I am. And, you know, I feel like MIT is a relatively serious place. But the MIT students are really fun people. And they're willing to make fun of themselves and be a little geeky. And I love that. And so whenever I can kind of encourage something like that in class, I always try to take advantage of it. And one of the things I like to ask them is, if one geek is going to propose marriage to another geek, do you really give them a diamond? Or maybe graphite. Because, you know, one is more kinetically stable and one's more thermodynamically stable. And what's the best gift? And then they can discuss which they think. And then I'm like, actually, neither, because what you should give them is the Green Lantern ring. And then someone can say, what about a Lord of the Rings ring, you know. And so you can sort of bring in this fun, but, you know, your arguments are kind of based on chemical principles rather than other things. So if you can really bring in the sort of fun humor that MIT students enjoy-- and they're always with me on that, and I love that part. So I'm always looking for ways to have something be a little bit more fun. Some of the units, I feel, really lend themselves to these fun examples. And other ones are a bit more challenging. But I want to make sure that everyone has something in. And if I find something online or whatever, I really try to use that in my class. And I found these videos of dogs teaching chemistry, which at least some of the students love. There's always a few MIT students who are like, I wish you would just stick to the facts and straight derivations. But for the most part, I think, attendance would suggest that students come and that they're engaged and that they're seeing these little things. 
And I've discovered that it really helps people remember, when you do something a little bit different. One of my favorite things to do is when I teach about buffers. I don't know why, but so many students have trouble with the concept of buffering and acid-base titrations. It's just really hard for a number of people. So I thought over the years of ways that I can really get them to remember certain things. And some of it, I can make fun. For others, I have to figure out how to just make them memorize certain things. And really just get them there where they can just do this off the top of their head. And so when I talk about a buffer, I always want to make the point that you need in your [INAUDIBLE] buffer something that's a weak acid, and the conjugate of that, a conjugate weak base. And you can't just have an acid, or just have a base. That won't buffer. Because you need something-- if you add acid, you need something that will buffer it to keep the pH constant. If you add base, you need something that will-- so it has to go both ways. So I like to dress up. And I have the abbreviation for a weak acid taped on the front of my shirt. And the abbreviation for the weak base taped on the back of my shirt. And I said, I am a buffer. And I'm an acid, and then I turn around, I'm a base, and I turn around again. I'm an acid. And I just twirl around the classroom. And I said, I want this engraved in your brain that a buffer has to have both a weak acid and a weak base. And if I keep twirling like this, you will try to purge this memory from your brain of Professor Drennan twirling around the classroom as an acid and a base, but you won't be able to. It'll be with you for the rest of your life. You'll be on your deathbed going, please, can the image of Professor Drennan twirling around in that silly t-shirt that she made with lab tape, please, can it leave my brain now? But no, no. It will be with you forever. 
And when I do that, the students do better on the acid base problems. So I am OK with really embarrassing myself. Whatever it takes, I go that extra distance because I want them to remember that.
|
How_We_Teach_5111SC_Principles_of_Chemical_Science
|
5111_A_Space_to_Discover_Your_Passion_for_Chemistry.txt
|
CATHERINE DRENNAN: So at MIT, there are several options of chemistry courses. Everyone has to place out of chemistry or take one semester, and even if you're going to be an economics major, you need that semester of chemistry, and there are several options. And this particular course is designed that if you don't have a very strong background in chemistry or maybe if you do but maybe it's not your favorite subject, that this is one you can come in and take even if you haven't taken chemistry before. And the backgrounds are actually quite diverse. There are absolutely students who have never taken it before. There are other students who have taken two years who probably could have taken a more advanced class, but they thought this one sounded like it would be more fun. So there's quite a bit of mixture in terms of background. A lot of people coming in are not that excited about chemistry, and this was one of my tremendous surprises when I taught here. I realized in high school not everybody is jumping at the bit to take the chemistry class, but at MIT, these are scientists and engineers. Everyone walking through the door should be in love with chemistry, and I discovered that absolutely was not the case and that it was my job to help convince them, these people who were generally interested in science and engineering, that chemistry was important. And I just thought it was obvious, but then I had to remember back to my experience, and it wasn't obvious to me the value of chemistry the first time I stepped in a chemistry course. So on the first day of class, I like to show a picture from my college yearbook, a picture of myself and college yearbook picture of one of my classmates who is Lisa Kudrow, who is Phoebe on Friends, and I put both of our pictures up. We were in the same class, and I asked them to guess who we are. They usually get it. And then I ask them to guess what I went to college to study. They guessed chemistry. I tell them drama. 
I ask them to guess about Lisa Kudrow, and they guessed drama, but it was actually biology, and she was a biology major at Vassar. And so then I said, how did this happen? How did I become a chemist and she become an actress? And I let them know this because I said, you may have come to MIT to study something, but that might not be what you end up studying. You might be taking this class because it's required, but that's just because you have not found your passion for chemistry yet, and you will find it. Maybe you'll find it this semester. Maybe it will be next year at MIT. Maybe it won't be for years until you're an engineer and all of a sudden you're working with chemists and you're like, oh, man, I really need to understand chemistry to do my engineering job better, and then that's when you fully appreciate the value of chemistry. But I hope that if you haven't found your passion yet, you will find it this semester, and I'm going to try to help you understand why chemistry is so amazing and how it can affect all sorts of different disciplines. And so if you really embrace it and look at it, you realize there's just wealth of information that is in there. And if you get these main tools that we're going to cover-- I'm going to teach you really all the basics that you need to know-- if you can get those, you can go on and do all sorts of things with that chemistry. And I have a 7-year-old daughter, and the world is a pretty scary place right now. There are a lot of problems associated with it, and we need people who understand chemistry to make the world better and to make the world better for my daughter. So this is a very personal exercise for me. I want everyone to come out of the class with this background in chemistry that they can save the world, because the world needs saving and my daughter is really cute, and I'd like to show a picture of her and say, make this world a better place for her.
|
How_We_Teach_5111SC_Principles_of_Chemical_Science
|
Clicker_Competitions.txt
|
CATHERINE DRENNAN: We started using these clicker devices in 5.111. Clickers are these small, credit-card-sized devices: I can put a question up on my computer that the students will see, and it will have three, four, or five choices. And the students can look at the question and click in what they think the answer is. And this was a big trend, I think, in education for big classrooms, because it allows the students to engage more. With a teacher standing in the front of the room with 300 people, students are not necessarily going to raise their hand and go, I'm confused, because it's a big class. But if I ask them a question to see if they're confused, they'll click in. I'll go, oh yeah, that did not go the way I wanted. Clearly people are confused. Let's talk about this some more. Or 95% get it right, and it's like, OK, I can move on and go on to the next thing. So it's a great teaching tool, I think. Everyone should be clicking in to the clicker question. So does someone want to explain why this is the correct answer? But when we first brought them in, the students were not so excited by them. And there were other classes at MIT that were using them, but it was like, OK, we'll do this. I didn't get the sense that there was a lot of enthusiasm for them. So I tried to think about ways that we could make it more fun. And so one thing that I thought is that we could have some competition. And in 5.111, there are multiple recitations, each run by a graduate student TA, so everyone in the class would have their recitation group of 20 people. And so I thought, well, on Friday in class, we'll have a little competition and we'll see which recitation gets the most clicker questions right. And they will get a reward, which is maybe some donuts at recitation next week. Something small. Some kind of food. And largely, it was bragging rights. Our recitation got the most questions right. 
So we started doing this, and then I surveyed the students both before we had done the competitions and after to kind of look at whether the students liked the competitions and felt like they were learning. And I found a really big difference. So students had always said, yes, I try to answer the questions correctly. But then when we asked them questions about how hard they wanted to work to understand chemistry and their real desire to learn and the impact of the course, that with the competitions, all of these things were much higher. And so it just seemed like they were more engaged. There was more of a buy in, and I would talk to some of these students. And they were like, oh yeah, I'm in Jay's recitation. And they felt a part of something. They had ownership. And a lot of the groups in the recitation, they would start studying together. Even though they didn't know each other before and this is often freshman year, so these might be some of the first people they're meeting in their classes. They got very serious about the competition some years, so they would meet the night before class to review some things so they were ready for the competition. And I've seen students who've been doing this for a number of years in their junior year, and they're like, oh yeah, I get together with my friends from the clicker competition every month or so. We go have dinner or do something fun. So they're making friends for life. And so I think that this is just in a big 300 person class, feeling like you belong, you're an active participant rather than a passive one. These clicker competitions built this sense of team.
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Extremal_Set_Theory_Sperners_Theorem.txt
|
YUFEI ZHAO: In this video, we'll look at an application of the probabilistic method to extremal set theory. Extremal set theory is an area of combinatorics concerned with questions such as, what is the largest collection of sets that satisfies certain desired properties? And the question that we'll look at in this video is the following. What is the largest antichain? More precisely, what is the largest collection of subsets of some ground set, the numbers 1 through n-- so this is an n-element set-- such that none of these sets is a subset of another? This is what we mean by an antichain. And for a given value of n, what is the largest number of sets l that we can have, as a function of n? To give an example, when n equals 3, you can take the sets {1, 2}, {1, 3}, and {2, 3}, and you see that none of these sets contains another. More generally, when you take all k-element subsets of 1 through n, this collection also has the property that no set is a subset of another, because all these k-element subsets have the same size, so none of them can contain another. Now, this collection has n choose k sets. And which value of k maximizes the quantity n choose k? Well, it is maximized at k equal to n over 2-- or, when n is odd, we can round either down or up, leading to n choose the floor of n over 2. Rounding down or, equivalently, rounding up produces the same answer. So this many sets we can get from an antichain. Now, this is just one example. And the question is, is this the best example? Can we do better? Can we get an even larger collection of sets that forms an antichain? And a classic result, Sperner's theorem, says that, no, this is actually the best that we can do. 
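As a quick sanity check on the claim that the middle binomial coefficient is optimal, here is a small brute-force script (my own illustration, not from the lecture) that enumerates every family of subsets of {1, ..., n} for tiny n and confirms that the largest antichain has exactly n choose floor(n/2) sets.

```python
from itertools import combinations
from math import comb

def is_antichain(family):
    # No set in the family may contain another.
    return all(not (a <= b or b <= a)
               for a, b in combinations(family, 2))

def max_antichain_size(n):
    # Enumerate every family of subsets of {1, ..., n} as a bitmask
    # over the power set -- feasible only for very small n.
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(1, n + 1), r)]
    best = 0
    for mask in range(1, 1 << len(subsets)):
        if bin(mask).count("1") <= best:
            continue  # cannot beat the current record
        fam = [s for i, s in enumerate(subsets) if mask >> i & 1]
        if is_antichain(fam):
            best = len(fam)
    return best

for n in range(1, 5):
    assert max_antichain_size(n) == comb(n, n // 2)
```

The double-exponential enumeration is only a sanity check for n up to about 4; Sperner's theorem, proved next, settles every n.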
And what that theorem says is that if you have l subsets of 1 through n such that none of the sets is a subset of another, then, in fact, l is at most n choose the floor of n over 2. So this answers the question that we were asking. And in this video, we'll use the probabilistic method to prove Sperner's theorem. It's quite a beautiful proof, and the introduction of randomness into the proof is quite remarkable for a theorem that really doesn't involve any randomness at all. What we will prove is actually the following theorem, which is slightly stronger than Sperner's theorem, also known as the LYM inequality, named after three people who discovered it. And it says that, under the hypothesis that we just set up, we furthermore have the following consequence: if you take the sum, over i from 1 to l, of the quantity 1 over n choose the size of Ai, then this sum is at most 1. So let us first prove this LYM inequality, and then afterwards, we'll deduce Sperner's theorem from it. Let's now introduce randomness into the problem. Let sigma 1 through sigma n be a randomly chosen permutation of the numbers 1 through n. And here, we'll choose this permutation uniformly at random, meaning that, of all the n factorial different permutations, we select each one with equal probability. And now let's consider the following chain of sets, starting with the empty set, then the set containing just the first element of this permutation, sigma 1, then the set containing the first two elements of this permutation, sigma 1 and sigma 2, and so on, until we have added all the elements into the set. So these are the sets of prefixes of this permutation, where we add one element at a time. This is a chain of subsets of 1 through n. And let's consider the following event, for which we'll evaluate its probability. 
So the event that we'll consider is the event that Ai-- one of the sets that was originally given in the theorem statement-- appears in this chain. Now, this chain is random because the sigmas form a random permutation. Even though the Ai's are deterministic-- the Ai's are given-- this permutation is random. So this is some event which has a probability. When does Ai appear in this chain? Precisely when all the elements of Ai appear in this permutation before all the non-elements of Ai. So we can count how many ways this can happen. Of all the n factorial different permutations, the number of permutations where the elements of Ai appear first is the size of Ai factorial, times-- for all the non-elements of Ai appearing afterwards-- n minus the size of Ai factorial. And the resulting probability is equal to 1 over the binomial coefficient n choose the size of Ai. OK. Next, let us note the following: no two different Ai, Aj can simultaneously appear in the chain. Why is that? Well, we assumed that no Ai is a subset of another Aj, and if, indeed, you had Ai and Aj both appearing in the chain, then one of them would contain the other. So this chain cannot contain two different Ai's at the same time. And therefore these events, as we run i from 1 through l, are disjoint. If these events are disjoint and each of them has this probability, then the sum of these probabilities must add up to at most 1. And that's indeed the conclusion that we're looking for. So the sum of the probabilities that Ai appears in this random chain adds up to at most 1, because these events are disjoint from each other. And we calculated earlier that these event probabilities are given by 1 over n choose the size of Ai. So this concludes the proof of the LYM inequality. 
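The counting step can be checked concretely. Below is a tiny script (my own illustration; the particular n and A are arbitrary choices) that enumerates all n! permutations and verifies that the fraction in which every element of A precedes every non-element is exactly |A|!(n - |A|)!/n! = 1/(n choose |A|).

```python
from itertools import permutations
from fractions import Fraction
from math import comb, factorial

def chain_probability(n, A):
    """Exact probability that A appears in the prefix chain of a uniform
    random permutation of 1..n, i.e. all of A's elements come first."""
    A = frozenset(A)
    hits = sum(1 for perm in permutations(range(1, n + 1))
               if frozenset(perm[:len(A)]) == A)
    return Fraction(hits, factorial(n))

n, A = 6, {2, 5}
p = chain_probability(n, A)
# |A|! (n - |A|)! / n!  ==  1 / (n choose |A|)
assert p == Fraction(factorial(len(A)) * factorial(n - len(A)), factorial(n))
assert p == Fraction(1, comb(n, len(A)))
```

Using `Fraction` keeps the comparison exact rather than relying on floating-point equality.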
You see, the final inequality is precisely what we were trying to establish. And finally, let us prove Sperner's theorem by deducing it from the LYM inequality. This is a quick deduction, because n choose the size of Ai is at most n choose the floor of n over 2 for all i-- the binomial coefficients for a given n max out in the middle, so we have this inequality. And thus-- here, I'm just rewriting the LYM inequality-- the sum, over i from 1 to l, of 1 over n choose the size of Ai is at most 1; that's what we just proved. Then, applying this individual bound to each term, we get l terms, each with n choose the floor of n over 2 in the denominator. And rearranging this inequality, we get the desired upper bound on l. So this concludes the proof of Sperner's theorem. It's a beautiful result with a beautiful proof, and a wonderful illustration of the probabilistic method in combinatorics: although the theorem statement itself is deterministic and doesn't involve any randomness, the proof works by introducing randomness into the problem. I think it's one of the most beautiful applications of the probabilistic method.
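Since the LYM inequality quantifies over every antichain, it can also be checked exhaustively on a tiny ground set. The sketch below (my own, not from the lecture) enumerates all nonempty antichains of subsets of {1, ..., n} for small n and verifies that the sum of 1/(n choose |Ai|) never exceeds 1, using exact rational arithmetic.

```python
from itertools import combinations
from fractions import Fraction
from math import comb

def antichains(n):
    # Yield every nonempty antichain of subsets of {1, ..., n}.
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(1, n + 1), r)]
    for size in range(1, len(subsets) + 1):
        for fam in combinations(subsets, size):
            if all(not (a <= b or b <= a)
                   for a, b in combinations(fam, 2)):
                yield fam

def lym_sum(family, n):
    # Left-hand side of the LYM inequality, as an exact rational.
    return sum(Fraction(1, comb(n, len(A))) for A in family)

n = 3
assert all(lym_sum(fam, n) <= 1 for fam in antichains(n))
```

Note that any full level of the subset lattice (all k-element subsets for a fixed k) attains equality, since it contributes n choose k terms of size 1/(n choose k).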
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Existence_of_Graphs_with_High_Girth_and_High_Chromatic_Number.txt
|
PROFESSOR: In this video, we'll look at a beautiful result that provides us with the existence of graphs with high girth and high chromatic number. It's a very nice demonstration of the power of the probabilistic method. Let's recall some definitions. Given a graph G, its chromatic number is the minimum number of colors we need to color the vertices of G so that no edge has both endpoints receiving the same color. The girth of a graph G is the length of the shortest cycle. For example, the following graph has girth 4, because the shortest cycle in this graph has length 4-- and there are a couple of them. Another example: this graph here has many cycles, but the shortest one has length 3-- it is a triangle up here-- so the girth is 3. If the graph is a tree, then it has no cycles, in which case we say that its girth is infinite. OK, so what can we say about girth and chromatic number? Well, suppose your graph contains a clique on k vertices. Then just to color this clique, we need at least k different colors, so the chromatic number is at least k. Can we say something in the reverse direction? Suppose we have a graph with high chromatic number. Would it be possible to deduce some local information about the graph? For example, would it be possible to say that if we have very high chromatic number, then there must be a triangle somewhere in the graph, or some fairly dense structure that's very local in this graph? The main theorem that we'll present in this video gives a definitive no to this question. It is a classic result due to Paul Erdos from the '50s. And the statement of the theorem is the following: for any positive integers k and l, there exists a graph with girth bigger than l and chromatic number bigger than k. OK, so let's pause for a moment and think about what this theorem is saying. Given any k and l, no matter how large, we can find a graph. 
There exists a graph for which there are no short cycles, no cycles of length at most l, and for which, to properly color the vertices, you need strictly more than k colors. So even though this graph has very high chromatic number, around every vertex, if you only look not too far away, the graph locally looks like a tree, because there are no short cycles. In particular, having high chromatic number does not give you local information, and Erdos shows that there always exist counterexamples to any statement that tries to extract local information from high chromatic number. OK, so the rest of this video will concern the proof of Erdos' theorem. It is quite a miraculous result. Before Erdos proved it, there were other results in graph theory that constructed very explicit graphs with some large chromatic number and girth, but not for all values of k and l. Those constructions are, in some sense, very hands-on: they tell you exactly how to construct the graphs. But they were not able to achieve arbitrarily large girth and chromatic number simultaneously. Erdos' insight was to use randomness, to use the probabilistic method, and show that by appropriately modifying a random graph, one can achieve the desired outcome. So let's see how this works. We'll begin by taking a random graph G(n,p). This is the Erdos-Renyi random graph: there are n vertices, and the edge probability is p, meaning that we flip a probability-p coin for each possible edge and put down an edge independently with probability p between every pair of vertices. So let G be drawn from this distribution; G is this random graph. Now let's take the specific choice p = (log n)^2 / n. It turns out other choices of p work as long as p falls in some range, but for concreteness we'll take this very specific choice. So this is a random graph.
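As a small aside, sampling from this distribution is straightforward. Here is a minimal Python sketch (the function name `sample_gnp` and its parameters are my own, not from the lecture):

```python
import random

def sample_gnp(n, p, rng=random):
    """Erdos-Renyi G(n,p): flip an independent probability-p coin
    for each of the C(n,2) possible edges."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

random.seed(0)
g = sample_gnp(100, 0.5)
# the number of edges concentrates around p * C(100,2) = 2475
assert abs(len(g) - 2475) < 300
```

The edge set is stored as pairs (i, j) with i < j, one coin flip per pair, exactly as described above.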
And let's construct the random variable X, defined to be the number of cycles of length at most l in G. These are the short cycles in G, and these are the cycles that we want to avoid, because we want to construct a graph with girth strictly larger than l. So we really do not like these cycles, and it would be nice if there were not too many of them. So let's compute the expected value of X. For this computation, we use linearity of expectation and sum over all possible cycle lengths i from 3 to l. For cycles of length i, there are n choose i choices of which i vertices the cycle can involve, and once you specify those vertices, there are (i-1) factorial circular orderings of them, which we divide by 2 because a cycle can be traversed in two directions that correspond to the same cycle. So this quantity, n choose i times (i-1) factorial over 2, is the number of cycles of length i in the complete graph on n vertices, with all edges present. For each such possible cycle, the probability that the cycle appears in G is p to the i. So the second factor here is the probability that this cycle appears in G(n,p). OK, so this is some expression. It's a little complicated, but we can simplify it and do some approximations to upper bound this quantity. If you expand n choose i in terms of factorials, you can upper bound the first two factors by n to the i, leaving the p to the i factor intact. We chose p so that p times n equals log squared of n, so you can rewrite n to the i times p to the i as (log n) to the 2i. There are at most l terms here, so let me furthermore upper bound the whole expression by l times (log n) to the 2l. Here, l is a constant, so this quantity grows like some power of log n, certainly much more slowly than the function n. So it's little o of n.
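The expectation just computed can be evaluated exactly in a few lines. This is a sketch (my own code, not from the lecture) comparing the exact sum with the bound l times (log n) to the 2l; note that for the bound to dip below n/2 the parameter n must be quite large, which is why a large n and a small l are used here:

```python
import math

def expected_short_cycles(n, l, p):
    """E[X] = sum over i = 3..l of (# i-cycles in K_n) * p^i,
    where the number of i-cycles in K_n is C(n,i) * (i-1)! / 2."""
    return sum(math.comb(n, i) * math.factorial(i - 1) // 2 * p**i
               for i in range(3, l + 1))

n, l = 10**9, 3
p = math.log(n)**2 / n                 # the lecture's choice of p
ex = expected_short_cycles(n, l, p)
bound = l * math.log(n)**(2 * l)       # the upper bound l * (log n)^(2l)
assert ex <= bound                     # the crude bound really is an upper bound
assert ex < n / 2                      # few short cycles relative to n
```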
It's a fairly crude approximation, but it will be sufficient for our purposes. The conclusion of this calculation can be read as: the expected number of short cycles is fairly small. Well, if there are few short cycles, we can get rid of all of them to obtain a graph without short cycles. This is the basic idea of the alteration method in the probabilistic method. We start with this random graph G(n,p); it's not going to give us what we want right away, but we're going to do something to fix the defects in this graph, in this case, get rid of the short cycles. To that end, let us note that by Markov's inequality, the probability that the number of short cycles, where short means length at most l, the probability that X is at least n over 2, is upper bounded by the expectation of X divided by n over 2. And because the expectation of X is little o of n, this final quantity is little o of 1, so it decays to 0 as n goes to infinity. OK, so this is a good sign. It means that, typically, the graph does not have very many short cycles; it has fewer than n over 2 of them. And then eventually, we'll be able to remove one vertex from each short cycle to obtain a graph with a still fairly large number of vertices, but with no short cycles left after the alteration. Now, let's think about chromatic number. We want to ensure that the graph that we end up having has high chromatic number. To that end, we'll recall the following fact. If I give you a graph G, then the chromatic number of G, written chi of G, is the minimum number of colors that one needs to color the vertices of the graph so that no edge has the same color on both endpoints. In particular, every color class, so if you look at a single color, like red, and look at the vertices colored by that color, these vertices have no edges between them, and so they form an independent set. So each color class is an independent set.
And so the number of color classes you need should be at least the number of vertices divided by the independence number, the size of the largest independent set, written alpha of G. Indeed, each color class is an independent set, and each independent set has size at most alpha of G, and therefore the number of colors you need is at least the number of vertices divided by the independence number. OK, so we see that it makes sense to think about the independence number. Now, for every h, the probability that the independence number is at least h is at most the following. Here we use the union bound: for each h-vertex subset, of which there are n choose h, the probability that those h vertices form an independent set is the probability that no edges appear among these h vertices, which is (1 minus p) raised to the power h choose 2. So this is some expression, and we can do some manipulations to simplify it, yielding an upper bound of n to the h times e to the minus p h (h minus 1) over 2, using the fact that 1 minus p is less than e to the minus p. Simplifying even further, we see that this expression can be rewritten this way. And now let's make a choice for what h we want to set. Let's set h to be a quantity which for now seems a little mysterious, but which turns out to be a natural choice after we see what the calculation looks like. So we set h to be 3 times log n over p. And knowing what p was from earlier, this quantity is 3n over log n. If we set h to be this quantity, then you see that in this expression, the numerator can be simplified.
And the result is that this expression decays at least as fast as n raised to minus some positive constant; in particular, it goes to 0 as n goes to infinity. And that's really the only thing that we need out of this calculation. So if we set h to be this appropriate quantity, then typically, one does not have independent sets of size at least h in the graph. OK, so let's regroup. We proved a couple of things. First, we showed that typically, and when I say typically, I mean with probability approaching 1 as n goes to infinity, this graph does not have many short cycles, meaning cycles of length at most l; it has fewer than n over 2 of them. And also, typically, this graph does not have large independent sets. OK, now let us put everything together. By choosing n sufficiently large, we can simultaneously ensure that the probability that X, the number of short cycles, exceeds n over 2 is strictly less than 1/2, and the probability that the independence number exceeds h is also strictly less than 1/2. Both of these probabilities decay to 0 as n goes to infinity, so by choosing n sufficiently large, we can make sure that both are strictly less than 1/2, and therefore there is some outcome that lies outside both events. Thus, there exists a graph G with fewer than n over 2 cycles of length at most l, and furthermore, its independence number, the size of the largest independent set, is at most h, which we set earlier to be 3n over log n. OK, so we have such a graph G with these nice properties. We're looking for a graph with high girth, and G has not too many short cycles, so we can get rid of all the short cycles by removing one vertex from each cycle of length at most l. So remove one vertex from each such cycle. Doing so results in a subgraph G prime.
And then we know that the girth of G prime is strictly larger than l, because we got rid of all the short cycles. And furthermore, the chromatic number of G prime is, by what we saw earlier, at least the number of vertices of G prime divided by its independence number. Well, the number of vertices of G prime is at least n over 2, because we removed at most n over 2 vertices from G to obtain G prime. And the independence number of G prime is at most the independence number of G, because any independent set in G prime is automatically an independent set in G, and we saw that the independence number of G is at most h, which is 3n over log n. So we have n over 2 in the numerator and 3n over log n in the denominator, and the whole expression simplifies to log n over 6, which is bigger than the constant k that we were given, as long as n is sufficiently large. And thus, G prime satisfies the desired requirements. This finishes the proof of this theorem of Erdos that there exist graphs with arbitrarily large girth and arbitrarily large chromatic number. To review some of the key ideas in this proof: we used the probabilistic method with alterations, first constructing a random graph with parameters chosen so that this random graph typically has very few short cycles, so that we can then remove one vertex from each short cycle and get rid of all the short cycles. This is how we ensure large girth. To obtain high chromatic number, we noted that this graph typically does not have large independent sets, so its independence number is typically small, and therefore the independence number of the subgraph must be small as well. And having small independence number implies high chromatic number.
OK, so it's a beautiful illustration of the probabilistic method with alterations that allows us to deduce this highly counterintuitive result, which was quite surprising because, previously, various researchers had tried to construct such examples of graphs by hand, explicitly, but unsuccessfully. And Erdos brought the beautiful insight that by introducing randomness, by looking at random graphs, one can indeed prove the existence of graphs with highly counterintuitive properties.
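As a numerical sanity check of the independence-number estimate used in the proof, the logarithm of the union bound n^h times e^{-p h(h-1)/2}, with the lecture's choices p = (log n)^2 / n and h = 3 log n / p, can be evaluated directly. Here is a sketch in Python (the function name is my own):

```python
import math

def log_union_bound(n):
    """log of n^h * exp(-p*h*(h-1)/2) for p = (log n)^2 / n
    and h = 3*log(n)/p = 3n/log(n), as in the proof."""
    p = math.log(n)**2 / n
    h = 3 * math.log(n) / p
    return h * math.log(n) - p * h * (h - 1) / 2

# the bound tends to 0 (its log goes to minus infinity), so large
# independent sets are unlikely once n is large
values = [log_union_bound(n) for n in (10**3, 10**6, 10**9)]
assert all(v < 0 for v in values)
assert values[0] > values[1] > values[2]   # decreasing in n
```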
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Markov_Chebyshev_and_Chernoff.txt
|
YUFEI ZHAO: In this video, we'll look at three basic yet important inequalities in probability: Markov's inequality, Chebyshev's inequality, and the Chernoff bound. Markov's inequality says that if we are given X, a real-valued non-negative random variable, then for every positive number lambda, the probability that X is at least lambda is no more than the expectation of X divided by lambda. One way to interpret this inequality is that if X is a non-negative random variable with small expectation, then it is unlikely for X to be very large. This is a very important and useful inequality, so let us prove it. The proof is quite short. We can start with the expectation of X and rewrite it in the following way: it is at least the expectation of X times the indicator function of the event that X is at least lambda. This indicator is 1 if X is at least lambda and 0 otherwise. Next, I claim that the expression inside the expectation is at least lambda times this same indicator, because when X is at least lambda, well, X is at least lambda, and otherwise both sides are 0. So this inequality holds as well. And finally, we can pull out the lambda and get lambda times the probability that X is at least lambda. This finishes the proof of Markov's inequality. Next, let's move on to Chebyshev's inequality. The statement of Chebyshev's inequality is that if we're given a real random variable X, then for every positive real number lambda, the probability that X deviates from its mean by at least lambda times the square root of the variance of X is at most 1 over lambda squared. Let me remind you that the variance of the random variable X is defined to be the expectation of (X minus the expectation of X) squared, and it is also equal to the expectation of X squared minus the square of the expectation of X.
So this is the variance of the random variable X. Intuitively, what Chebyshev's inequality tells us is that if a random variable has small variance, then it is unlikely to deviate too far from its mean. Let us now prove this inequality. The left-hand side, namely this probability, can be rewritten by squaring the inequality inside the probability. Doing so, on the left-hand side we get (X minus the expectation of X) squared, and on the right-hand side we get lambda squared times the variance of X. And now let us apply Markov's inequality, which we saw earlier, to upper bound this probability by the expectation of (X minus the expectation of X) squared, divided by lambda squared times the variance of X. But you see, from the definition of variance, the expression circled in red here is equal to 1, and therefore the right-hand side is equal to 1 over lambda squared. So that finishes the proof of Chebyshev's inequality. The third inequality that we'll look at is known as the Chernoff bound. Unlike the earlier inequalities, which are for fairly general random variables, the Chernoff bound is for a more specific setting, namely a sum of independent random variables. More specifically, let us look at the case where the random variable S sub n is a sum of n independent plus-or-minus 1's: each X_i is plus 1 with probability 1/2 and minus 1 with probability 1/2, all chosen independently at random. One way to interpret S_n is that we walk on the number line and take n steps: at each step, we flip a coin and walk either 1 step to the right or 1 step to the left. The conclusion of the Chernoff bound is that for every positive lambda, the probability that S sub n is at least lambda times root n is at most e to the minus lambda squared over 2. In other words, S sub n, where we end up in this walk, cannot drift too far away from the origin.
And too far here means some large multiple of root n. It is worth noting that root n here is the square root of the variance of S sub n. So if we simply applied Chebyshev's inequality to S sub n, we would arrive at the following conclusion: Chebyshev tells us that the probability that S_n is at least lambda times root n is at most 1 over lambda squared, which is already an interesting bound. But the Chernoff bound gives a much stronger conclusion: its right-hand side decreases very rapidly as a function of lambda, compared to the Chebyshev bound, which decreases only at the rate of 1 over lambda squared. Let us now prove the Chernoff bound. For the proof, we'll need to introduce a new idea, the idea of a moment-generating function. Let t be a non-negative real number, and let us consider the moment-generating function, in this case given as follows: the expectation of e to the t times S sub n. So instead of considering, for example, the expectation of S sub n, we consider the expectation of an exponential applied to the random variable; the expectation here is over the randomness in S sub n. Let us now rewrite this expression by expanding the definition of S sub n, and we get the following. The next step is where we crucially use the fact that the X_i's are independent random variables, which allows us to split this expectation into a product of individual expectations. Let's look at one of these expectation factors. Well, X_1 is plus 1 with probability 1/2 and minus 1 with probability 1/2, so this factor equals (e to the t plus e to the minus t) over 2, and likewise for all the other factors. So the right-hand side equals this quantity raised to the power n. This exactly computes the moment-generating function of the random variable S sub n. Let us now try to manipulate this right-hand side expression to make it easier to work with.
So starting with this expression, temporarily dropping the power of n and just looking at what's inside the parentheses, we can apply a Taylor series expansion. I'm going to do the steps slightly on the quick side, but I encourage you to work it out yourself. Basically, we apply the Taylor series expansion and note that the odd-index terms all cancel each other out; again, I encourage you to try this calculation yourself. We get the following. On the other hand, let's look at the expression e to the t squared over 2; we'll see where this comes from in a second. (The X on the slide should be a t.) By using the Taylor expansion here, we get the following: each term is t to the 2k over (k factorial times 2 to the k). So this is by writing out the Taylor series expansions for both of these expressions. And finally, let's note that even on a term-by-term basis, one has this inequality, by comparing what happens in the denominators. I'll leave this as an exercise, but it is not too hard to see. This allows us then to rewrite the previous line by upper bounding the moment-generating function by the expression e to the n t squared over 2. So that's what we just deduced. Finally, let us use Markov's inequality to upper bound the probability that S sub n is at least lambda times root n, which is what we're trying to bound. Previously, we squared both sides, but now let's take an exponential of both sides. For this step, I actually need t to be strictly positive, so let me change the hypothesis on t. So this is true, and then we can apply Markov to upper bound this probability by the expectation of e to the t S sub n, which is random, divided by the right-hand side, which is a constant. And earlier, we had determined an upper bound for the moment-generating function.
And plugging everything in, we get the expression e to the (n t squared over 2, minus t lambda root n). This inequality is true for every positive value of t, so the next step is to use the value of t that works most in our favor, which one finds by minimizing the expression on the right-hand side. It turns out that the optimal choice is t equals lambda over root n; you can determine this optimal value by taking the derivative. Setting this value of t into the bound gives the desired conclusion: if you plug this t into the right-hand side, you will see the right-hand side of the Chernoff bound popping out. So that's the proof of the Chernoff bound. The key idea introduced here is the moment-generating function. Because of the independence of the components of S sub n, we can factorize the moment-generating function and, through some additional algebraic manipulations, derive the desired bound. All three of these inequalities are basic and powerful techniques in probability. The Chernoff bound in particular gives an extraordinarily good bound on the probability that a sum of independent random variables deviates from its expectation. And although here we only looked at the case where each X_i is plus 1 or minus 1, each with probability 1/2, the same proof can be applied in a more versatile way to other settings, as long as you have a sum of independent random variables.
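These tail bounds are easy to compare empirically. Below is a small Monte Carlo sketch (my own code, not from the lecture) that simulates the plus-or-minus 1 walk and checks that the observed tail sits below both the Chebyshev-style bound 1/lambda^2 and the Chernoff bound e^{-lambda^2/2}:

```python
import math
import random

random.seed(0)
n, trials, lam = 100, 20_000, 2.0
threshold = lam * math.sqrt(n)            # = 20

hits = 0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))   # one sample of S_n
    hits += (s >= threshold)
tail = hits / trials                      # estimate of P(S_n >= lam * sqrt(n))

assert tail <= 1 / lam**2                 # Chebyshev-style bound: 0.25
assert tail <= math.exp(-lam**2 / 2)      # Chernoff bound: about 0.135
```

For lambda = 2, the true tail is roughly the Gaussian tail P(Z >= 2), around 0.02, comfortably below both bounds.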
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Extremal_Set_Theory_Intersecting_Families.txt
|
YUFEI ZHAO: Let us look at an application of the probabilistic method to extremal set theory. Extremal set theory concerns the study of families of sets that have certain desirable properties, asking questions such as: what is the largest set family with such properties? In this video, we'll focus on the following property of being an intersecting family. This means a collection of sets A_1 through A_l such that these sets pairwise have nonempty intersection: whenever you take any two of these sets, their intersection is never the empty set. So here are two basic questions that one can ask. The first one, which will turn out to be an easy warm-up, is: what is the largest intersecting family of subsets of 1 through n? Here, there are in total 2 to the n subsets of 1 through n, but if we only want to keep a collection of sets that forms an intersecting family, what is the largest family we can get? The second question, which will turn out to be far more interesting and will involve an application of the probabilistic method, is: what is the largest intersecting family of k-element subsets of 1 through n? Here, n and k are given as inputs to the question, and we're asked to find the largest intersecting family of k-element subsets of 1 through n. All right. The first question, as I mentioned earlier, turns out to be fairly easy, and we'll solve it as a warm-up. As an example, just to illustrate what we're talking about, consider all sets containing the element 1. This collection of sets is intersecting, because they all contain the element 1, so every pair intersects in a set containing the element 1, which is in particular nonempty.
This collection of sets has size 2 to the n minus 1, because once you require the element 1 to be in the set, each of the other n minus 1 elements can be in or out of the set, and there are 2 to the n minus 1 such possibilities: a binary choice for each of the elements other than 1. OK. Now, this is just an example, but it turns out to be the best that we can do. The claim is that we cannot get a larger family, because for every subset A of 1 through n, at most one of A and the complement of A (the elements of 1 through n not in A) can be in the intersecting family. What we're doing here is pairing up the sets with their complements: because the family is intersecting, it cannot simultaneously contain a set and its complement. So the family can have at most half of all 2 to the n sets, thereby showing that 2 to the n minus 1 is indeed the best that you can do. OK. So this finishes question one, and now let's move on to the more interesting and harder question two. Again, let's start with some examples. First, here's an easy case: if n is less than 2k, then by the pigeonhole principle, any pair of k-element subsets of 1 through n intersect. So we can take all n choose k sets, and they form an intersecting family; there's really nothing to do here, because we can just take all the k-element sets. So the harder, more interesting part of the question is what happens when n is at least 2k. Here, as an example, we can take, as earlier, all sets containing the element 1. This gives us n minus 1 choose k minus 1 sets, and this collection of sets forms an intersecting family, because the intersection of any pair contains the element 1 and is thus nonempty.
So this construction gives you n minus 1 choose k minus 1 sets. But is this the best you can do? Perhaps through some other construction, through other examples, there are even larger intersecting families of k-element subsets. That turns out not to be the case, and that is the main theorem that we'll prove in this video. This result is known as the Erdos-Ko-Rado theorem, a seminal and beautiful result in extremal set theory. The Erdos-Ko-Rado theorem says that if n is at least 2k, then any intersecting family, which we'll call F, of k-element subsets of 1 through n has size at most n minus 1 choose k minus 1. In other words, the construction that we gave just now is indeed optimal: you cannot get a bigger intersecting family of k-element subsets of 1 through n. Let us now prove the Erdos-Ko-Rado theorem. We'll introduce randomness into the problem and use some beautiful ideas from the probabilistic method. So here's how we start the proof. Let us order the numbers 1, 2, 3, and so on through n randomly around a circle. What do I mean by this? Take a circle, take a uniform random permutation of 1 through n, and place the numbers around the circle in the order of that permutation. For example, here I'm putting down nine numbers, so n equals 9, in some circular order chosen uniformly at random. (I'm missing a number here: 6.) So here are nine numbers, placed uniformly around the circle. And let us call a subset of 1 through n contiguous, so here I'm defining the term "contiguous," if its elements, in the way they're ordered around the circle, form an arc in this ordering. For example, the set of numbers 4, 3, 9, 5, this four-element set, is contiguous.
According to the circular ordering, these four numbers form a contiguous block. On the other hand, the set 1, 3 is not contiguous, because those elements do not form a contiguous block. So that's just a definition. Now, for a given set A, a k-element subset of 1 through n, what is the probability that A is contiguous? Here, A is fixed, not random; the ordering is random. So given this set, what is the probability that it is contiguous under the random ordering? Well, let's think about that. There are n different positions for where, geometrically, a length-k arc can lie on the circle. And once you fix that arc position, the probability that the k elements of A actually fall into those positions is 1 over n choose k. Summing over the n arc positions, the probability that A is contiguous under this random ordering is n over n choose k. And now, by linearity of expectation, the expected number of contiguous sets in this collection F is equal to the size of F times the probability that each individual set is contiguous, which we calculated earlier to be n divided by n choose k. On the other hand, I claim that the property of F being intersecting implies that in any given circular ordering, there are at most k contiguous sets of F. So let's pause and think about why this claim is true. If I give you an ordering and ask how many contiguous sets there can be: these contiguous sets have to pairwise intersect. So suppose we have one set; here, let me draw the elements as dots, thought of as positions on the circle. If that's one set, well, where can the other sets be?
Well, they have to intersect this block. So maybe another set is like this blue one. But if this blue set is in, then it excludes this green set from being a possibility, and so on. OK. So you see that, given that this red set is in, at most one of the blue and green sets on the second line can be in, at most one of the blue and green on the third line can be in, and at most one of the blue and green on the fourth line can be in. And these are all the possibilities for how another contiguous set can intersect the red set. So that's an argument that translates into this claim, that F being intersecting implies at most k contiguous sets in any given ordering. This argument is very analogous to the argument that we did at the beginning, where we paired up complementary sets; but this time, we're only looking at contiguous sets that intersect a given set. OK. Well, if you can only get at most k contiguous sets in any given ordering, then this quantity, which is a random variable, is always at most k, so in expectation, the quantity that we computed at the end is also at most k. Thus the size of F is at most k over n times n choose k, and expanding the binomial coefficient shows that this final quantity equals n minus 1 choose k minus 1. And that's exactly what we claimed. All right. So this finishes the proof of the Erdos-Ko-Rado theorem, which is a foundational result in extremal set theory. And it's a beautiful application of the probabilistic method to extremal set theory and to combinatorics in general: we start with a claim, a theorem, that is not random at all. It is completely deterministic, about finding the maximum possible size of a family satisfying some constraints. And yet the proof goes by introducing new randomness into the setup. That allows us, then, to derive this beautiful conclusion.
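The key claim, that an intersecting family has at most k contiguous members in any circular ordering, can be checked by simulation. Here is a sketch (my own code, not from the lecture) using the "star" family of all k-sets containing the element 1, with n = 9 and k = 4 as in the lecture's picture:

```python
import random
from itertools import combinations

n, k = 9, 4
# the intersecting family of all k-sets containing element 1;
# its size is C(n-1, k-1) = 56
family = [frozenset((1,) + c) for c in combinations(range(2, n + 1), k - 1)]

random.seed(1)
for _ in range(1000):
    order = list(range(1, n + 1))
    random.shuffle(order)                 # a uniform random circular ordering
    arcs = {frozenset(order[(i + j) % n] for j in range(k)) for i in range(n)}
    contiguous = sum(a in arcs for a in family)
    assert contiguous <= k                # the claim from the proof
```

In fact, for this particular family the count is always exactly k: the element 1 lies in exactly k of the n length-k arcs, and every arc containing 1 belongs to the family. That matches the expected value |F| times n over n choose k, which is 56 times 9/126 = 4 here.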
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Independent_Sets_and_Turáns_Theorem.txt
|
YUFEI ZHAO: In this video, we'll look at an application of the probabilistic method to graph theory. An independent set in a graph is a subset of vertices, no two of which are adjacent. For example, if the graph is this cycle on four vertices, an example of an independent set would be these two vertices: they are not adjacent to each other, whereas had I chosen the two vertices of an edge, that would not be an independent set. So an important question in graph theory is: given a graph, what can you say about the size of its independent sets? The following theorem, due to Caro and Wei, says that every graph G contains a large independent set in the following sense: it contains an independent set of size at least the sum, over all vertices v of G, of 1 over (the degree of v plus 1). So let us prove this theorem first, and then we'll see an application and some ways to interpret this result. The proof of this theorem applies the probabilistic method. We are given this graph G, and the first thing we'll do is order the vertices of G uniformly at random; that is, consider a random permutation of the vertices of G. And let's consider the set I of vertices defined as follows: these are the vertices v in G such that v appears before all of its neighbors in the random ordering. For example, in the graph from earlier, suppose we had ordered the vertices in this order, say, some random ordering 1, 2, 3, 4, where the edges are 1 to 2, 2 to 4, 3 to 4, and 1 to 3. Let's see whether each vertex belongs to I. The first vertex appears before all of its neighbors, so we put the first vertex in I. The second vertex does not appear before one of its neighbors, namely 1, so we do not put it in I. And the third vertex here also does not appear before all of its neighbors, so we do not put it in I.
And the fourth vertex, likewise, does not appear before all of its neighbors, so we do not put it in I either. In this case, the set picks up only the first vertex, but in more complicated graphs it can pick up additional vertices. I claim that I is always an independent set. Indeed, it is impossible to have two adjacent vertices in I, because one of them would appear before the other in the ordering, and that would violate the condition on how we chose I. Next, let's think about how large I is. For every vertex v in G, the probability that v is in I is the probability that v appears first among v and its neighbors. Well, v has d sub v neighbors, where d sub v is the degree of v, and the ordering is chosen uniformly at random, so this probability is 1 divided by the degree of v plus 1. Thus, by linearity of expectations, the expected size of this set I is equal to the sum over vertices v in G of the probability that v lies in I, and we just computed this probability. This is the expectation; this is what happens on average. Therefore, there must be some ordering whose induced set I has size at least the quantity that we just produced. That finishes the proof of the Caro-Wei theorem. Again, the way to think about this theorem is that if you are given a graph G whose vertices typically do not have large degrees, then it must contain a large independent set. Next, let us consider the complement of this graph G. The graph complement means that we flip the edges and non-edges: so if this is a graph G, then this will be the complement of G. Independent sets in G become cliques in the complement (cliques are complete graphs, with all edges present), and vice versa. So we can apply the Caro-Wei theorem to the complement of G to deduce the following corollary. It says that every n-vertex
graph G contains a clique on at least this many vertices. The quantity in the denominator inside the summation is the degree in the complement plus 1, which equals n minus the degree of v. Let us derive an interesting corollary of this corollary, which turns out to be a pretty foundational result in graph theory known as Turan's theorem. Turan's theorem says the following. If an n-vertex graph has more than 1 minus 1 over r times n squared over 2 edges, then the graph contains a clique on more than r vertices. So Turan's theorem gives us a bound on the maximum number of edges that a graph can have if it doesn't have a large clique. Furthermore, this bound is best possible in the following sense. Here, let us restrict to the case when n is divisible by r; when n is not divisible by r, a similar construction turns out to be essentially best possible, and this bound can be improved slightly, but not by too much. So here's the example. Let us take the n vertices and split them into r equal parts. In this illustration, r equals 3. And let's put in all the edges between parts, but no edges within a part. This is called the complete r-partite graph. A quick calculation shows that this graph has exactly 1 minus 1 over r times n squared over 2 edges. On the other hand, it does not contain a clique on more than r vertices: the biggest clique you can have comes from taking one vertex in each part. So this is an example showing that the statement of Turan's theorem is best possible. Let us prove Turan's theorem as a quick corollary of the earlier result. By the earlier corollary, G has a clique of size at least the sum over the vertices v in G of 1 over n minus the degree of v. And here, let us use the convexity of the function sending x to 1 over n minus x.
This is a convex function, which allows us to lower bound the sum by n over n minus the average degree of G. On the other hand, if the graph has more than this many edges, then the average degree is bigger than the corresponding quantity. Putting this into the expression, we get a final expression strictly bigger than n over n minus 1 minus 1 over r times n, and this quantity equals r. So there's a strict inequality here. In particular, G has a clique of size strictly greater than r, and that finishes the proof of Turan's theorem. Turan's theorem is an important and foundational result in graph theory, and it is the start of a subject that we now call extremal graph theory, concerning what one can say about a graph that has certain properties, such as not having a large clique. And the proof that we just saw is a beautiful illustration of applying the probabilistic method to graph theory, allowing us to prove this wonderful result, Turan's theorem, along with many other results that I hope you will learn in the future.
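Both results of this video can be sanity-checked by brute force on tiny graphs. The sketch below is illustrative code, not part of the lecture (the function names are my own): it computes the Caro-Wei bound for the four-cycle from the example and confirms that its maximum independent set size meets the bound.

```python
from itertools import combinations

def caro_wei_bound(n, edges):
    """Sum of 1/(deg(v)+1) over all vertices: the Caro-Wei guarantee."""
    deg = {v: 0 for v in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1 / (deg[v] + 1) for v in range(n))

def max_independent_set(n, edges):
    """Brute-force maximum independent set size (fine for tiny graphs)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                return k
    return 0

# The 4-cycle from the video, relabeled 0..3: edges 0-1, 1-3, 2-3, 0-2.
edges = [(0, 1), (1, 3), (2, 3), (0, 2)]
bound = caro_wei_bound(4, edges)       # every degree is 2, so bound = 4/3
alpha = max_independent_set(4, edges)  # = 2, which is at least 4/3
print(bound, alpha)
```

Here every vertex has degree 2, so the theorem guarantees an independent set of size at least 4/3, hence at least 2, which the four-cycle indeed attains.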
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Linearity_of_Expectations.txt
|
[SQUEAKING] [RUSTLING] [CLICKING] YUFEI ZHAO: In this video, let us look at a basic yet important concept in probability known as linearity of expectations and use it to deduce some interesting consequences in combinatorics via the probabilistic method. Linearity of expectations says that if you are given random variables x1 through xn (imagine these are real-valued random variables) and real constants c1 through cn, then the expectation of the linear combination c1 times x1 plus c2 times x2 and so on plus cn times xn can be computed by distributing the expectation symbol across the individual terms. So this is a basic and important property. It's worth noting that the analogous statement for products is often not true: it is not usually the case that the expectation of a product of two random variables is the product of their expectations, unless you're in some special circumstance, such as when x and y are independent or uncorrelated. Anyway, let us focus on linearity of expectations and see some ways to use it in combinatorial applications. The first example is the following question, to which we will be able to give a very quick and clean answer. The question is: what is the average number of fixed points of a permutation of 1 through n chosen uniformly at random? As a reminder, if we have numbers 1, 2, 3, 4 (here, n equals 4), then a permutation can be thought of as a way to map this set back to itself in one-to-one correspondence, and a fixed point is a number that gets mapped back to itself. So in this permutation, there are exactly two fixed points. OK, great. Let's answer this question using linearity of expectations. One way to think about the problem is that there are n factorial permutations.
And maybe we want to go about counting how many of them have zero fixed points, how many have one fixed point, how many have two fixed points. But that method can get pretty cumbersome and pretty difficult quite quickly. However, if we look at this problem through the lens of linearity of expectations, there turns out to be a very quick solution. The method is to introduce some random variables. Let x sub i be the random variable that equals 1 if i is a fixed point, meaning that the permutation maps i to itself, and 0 otherwise. So x sub i is the indicator random variable for the element i being a fixed point of this random permutation. What's the expectation of x sub i? Well, this is the probability that i is a fixed point. The permutation being chosen uniformly at random, i is sent to each of the elements 1 through n with equal probability. So in particular, it is sent back to itself with probability 1 over n. And the number of fixed points equals the sum of the x sub i's: x1 plus x2, and so on, up to xn. Now we can take expectations on both sides and apply linearity of expectations and see that each individual term is 1 over n and there are n such terms. So the answer is 1. That is the answer to this question: the average number of fixed points of a permutation chosen uniformly at random is exactly 1. And you see that this is a very quick calculation once you get the hang of the idea of linearity of expectations. Let us look at a slightly more interesting example. For this example, we'll consider the concept of a tournament. A tournament is a concept in graph theory referring to the following. There are n vertices; think of n players in some tournament. And between every pair of vertices, we have a directed edge pointing in one of the two directions. OK, so that's an example of a tournament.
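The fixed-point computation above can also be verified by direct enumeration over all n! permutations for small n. The helper below is an illustrative sketch of my own, not part of the lecture; the average comes out exactly 1 for every n, matching the linearity-of-expectations argument.

```python
from itertools import permutations

def avg_fixed_points_exact(n):
    """Average number of fixed points over all n! permutations of 0..n-1."""
    perms = list(permutations(range(n)))
    # Count positions i that the permutation maps to themselves.
    total = sum(sum(i == p for i, p in enumerate(perm)) for perm in perms)
    return total / len(perms)

for n in range(1, 7):
    print(n, avg_fixed_points_exact(n))  # prints 1.0 for each n
```

The total number of fixed points across all permutations is n times (n-1) factorial, which is n factorial, so the average is exactly 1, with no asymptotics needed.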
I'll need to introduce another concept, which is that of a Hamilton path. A Hamilton path is a directed path (meaning we travel along the edges according to their directions) that passes through every vertex of the graph exactly once, no more, no less. Let's see if we can find any Hamilton paths in this example. Well, I see one. If we start with the middle vertex and go along this edge, then this edge, and then this edge, that's a Hamilton path that goes through all four vertices, passing through each vertex exactly once, and it always traverses along the direction of the edges. Let us prove the following theorem. For every n, there exists a tournament on n vertices with at least n factorial times 2 to the minus n plus 1 Hamilton paths. In other words, for every n, there is some way to orient the edges of the complete graph on n vertices so that it has lots and lots of Hamilton paths, specifically at least this many. So that's the theorem we're aiming to prove. We will not prove this theorem by explicitly constructing such a tournament. Instead, we'll invoke the probabilistic method and show that a random tournament has this property in expectation. So here's the proof. Let's consider a random tournament on n vertices chosen uniformly at random. One way to do this is to take a complete graph on n vertices and, for every edge, flip a fair coin and use that coin to decide which of the two directions the edge is oriented. Now, let's think about the number of Hamilton paths. Each of the n factorial permutations of the vertices gives a potential directed path. So first, consider a permutation of the vertices, for example 2, 1, 3, 4, and think about the probability that the edges are oriented according to this permutation. Well, we have to flip n minus 1 coins.
And all of them have to come up in such a way that the edges point in the order of this permutation. So the probability that each of these permutations forms a directed path is precisely 2 to the minus (n minus 1). And now we invoke the linearity of expectations to compute the expected number of Hamilton paths: each of the n factorial permutations has probability 2 to the minus (n minus 1) of being a directed path, so the expectation is n factorial times 2 to the minus (n minus 1). This is a calculation analogous to the one we did in the earlier part of this video. Well, this is what happens in expectation, on average, and thus there must be some instance that is at least as large as this average. Thus, there exists a tournament with at least this many Hamilton paths. And that concludes the proof of the theorem that we laid out earlier. So this is an example of applying linearity of expectations as a step in the probabilistic method to prove this nice and simple result, that there exist tournaments with lots of Hamilton paths.
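For small n, both the expectation computation and the conclusion can be checked exhaustively. The sketch below (a brute-force check of my own, not part of the lecture) enumerates all 2^6 tournaments on 4 vertices, counts Hamilton paths in each, and confirms that the average is 4 factorial times 2 to the minus 3, which is 3, and that some tournament attains at least this average.

```python
from itertools import combinations, permutations, product

def count_hamilton_paths(n, orient):
    """orient maps each pair (u, v) with u < v to True if the edge points u -> v."""
    def has_edge(a, b):
        # Is there a directed edge a -> b?
        if a < b:
            return orient[(a, b)]
        return not orient[(b, a)]
    # A permutation is a Hamilton path iff every consecutive pair is a directed edge.
    return sum(
        all(has_edge(perm[i], perm[i + 1]) for i in range(n - 1))
        for perm in permutations(range(n))
    )

n = 4
pairs = list(combinations(range(n), 2))
counts = []
for bits in product([False, True], repeat=len(pairs)):  # all 2^6 tournaments
    orient = dict(zip(pairs, bits))
    counts.append(count_hamilton_paths(n, orient))

avg = sum(counts) / len(counts)  # = 4! * 2**(-3) = 3.0
best = max(counts)               # some tournament meets or beats the average
print(avg, best)
```

Note that every count is at least 1, consistent with the classic fact that every tournament contains a Hamilton path, though the theorem here only needs the averaging argument.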
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Threshold_for_a_Random_Graph_to_Contain_a_Triangle.txt
|
[SQUEAKING] [RUSTLING] [CLICKING] YUFEI ZHAO: Oftentimes, in applying the probabilistic method, we'll need to understand whether a certain random structure typically has a property that we're looking for. In this video, we'll look at one such property and understand how to analyze whether a certain random graph has the property of containing a triangle with high probability. The object that we're going to be looking at today is the Erdős-Rényi random graph, commonly referred to as G(n,p). This is the graph obtained by placing n vertices and, for each pair of vertices, putting down an edge between them with probability p, independently over all pairs of vertices. In other words, we flip a biased coin that comes up heads with probability p independently for each possible edge and put down the edges randomly as such. This is a random graph: a random structure, a random object. And we would like to understand various properties of this random graph. The property that we're going to look at in this video is the following: does this random graph typically contain a triangle? In the rest of this video, we'll be looking at p as a function of n, so p is allowed to change depending on n. We could denote this dependence by a subscript, but for convenience, we'll drop the subscript for the rest of the video. So, G(n,p): does it typically contain a triangle? By "typically" I mean the following mathematically precise meaning: with probability that approaches 1 as n goes to infinity. So the statements we are going to be looking at are asymptotic as n becomes large. The main theorem that we'll prove in this video consists of two statements, which I have conveniently labeled theorem 0 and theorem 1. Theorem 0, sometimes referred to as the 0 statement, is the statement about not having any triangles, having zero triangles. It says the following (remember, p is a function of n; p is allowed to depend on n).
If n times p approaches 0 as n goes to infinity, then this random graph G(n,p) is triangle-free with probability approaching 1 as n goes to infinity. In other words, with high probability, this random graph does not have any triangles. This makes sense in that the graph is pretty sparse when p is small; this statement basically covers the case when p is much less than 1 over n in terms of growth rate. So when p is very small, you expect not so many edges, and it turns out that there is typically not going to be a triangle in this random graph. The second part is theorem 1, the 1 statement. It says that when n times p approaches infinity (again, p is a function of n), then this random graph G(n,p) typically contains a triangle. That is, it contains a triangle with probability that approaches 1 as n goes to infinity. So if you have a high edge probability, then typically you would expect to see a triangle in a random instantiation of this random graph G(n,p). One way to summarize both statements is the following: 1 over n is a threshold for the property of containing a triangle. To interpret this sentence, imagine that p is much larger than 1 over n. Then we're in the theorem 1 setting, and the random graph typically contains a triangle. Whereas if p is much smaller than 1 over n, in the sense of being in the theorem 0 setting, then typically the random graph does not contain any triangles. So 1 over n is a threshold: depending on whether the edge probability p is much larger or much smaller than 1 over n, we expect very different behaviors from the perspective of whether the graph contains a triangle. These types of statements are widely studied in combinatorics and probability, and understanding how to establish and prove these types of statements is some of the bread and butter of probabilistic combinatorics. So in this video, let us prove the statement in this box. Let's start with the easier direction, which, in this case, is the first statement, theorem 0.
So let's prove theorem 0. We're looking at triangles, so let us define a random variable x to be the number of triangles in the random graph G(n,p). We can compute the expectation of x without much difficulty, because by linearity of expectations, there are n choose 3 triples of vertices, and each triple forms a triangle with probability p cubed. We see that this quantity is on the order of n cubed p cubed; we get to drop constant factors in this asymptotic estimate. Now, when n times p goes to 0 as n goes to infinity, n cubed p cubed also goes to 0. So this quantity is little o of 1: a quantity that goes to 0 as n goes to infinity. Let us now apply Markov's inequality, which we explained and proved in a different video. By Markov's inequality, we find that the probability that x is at least 1 is at most the expectation of x divided by 1, and from earlier, this quantity is little o of 1. So the number of triangles is small in expectation, and therefore the probability that there is at least one triangle goes to 0 as n goes to infinity. This statement is equivalent to the statement that the probability that there are zero triangles approaches 1 as n goes to infinity, which finishes the first part of the theorem. In the first part, the edge probability is very small, so the expected number of triangles goes to 0 as n goes to infinity, and by Markov's inequality, we see that typically there should not be any triangles in this random graph. Let us now move on to the second part, namely theorem 1. If we use the setup that we had earlier, we see that the number of triangles in expectation goes to infinity as n goes to infinity. However, this does not immediately imply that x is typically positive, because it could be the case that x is still typically zero but is very, very large a diminishing fraction of the time.
And in that case, we would still have that the expectation of x goes to infinity while it would not be true that x is typically positive. So we'll need additional ideas to establish that x is typically positive. What we will do is use the second moment bound to show that x is typically close to its mean, so that x is very concentrated around its expectation. In that case, having an expectation that goes to infinity would then imply that x is typically positive. So let's carry out this idea. Consider the probability that x is equal to 0. Since x is a non-negative quantity, the event that x equals 0 is the same as the event that x is at most 0, which in turn is the event that x deviates from its mean in the downward direction by at least the expectation of x; that is, x minus the expectation of x is at most minus the expectation of x. So these probabilities are equal to each other. We can then relax the event inside as such, again noting that the expectation of x is always non-negative. Now we apply Chebyshev's inequality; you'll find an explanation and proof of Chebyshev's inequality in a different video. It allows us to conclude that this probability is at most the variance of x divided by the square of the expectation of x. All right. So what have we learned from this calculation? Let me state a corollary. If the variance of x is much smaller than the square of the expectation, then the right-hand side goes to 0 as n goes to infinity, which then implies that x is positive with high probability, meaning that the probability that x is positive goes to 1 as n goes to infinity. So this is the conclusion that we will use to establish theorem 1. Now, the number of triangles we'll be able to write in the following way.
x is equal to the sum over all triples of vertices i, j, k of the quantity x sub ij times x sub ik times x sub jk, where we define x sub ij to equal 1 if ij is an edge and 0 if ij is not an edge; the vertices are labeled by the numbers 1 through n. So in this sum, which is indexed over triples of vertices, each term is 1 if the vertices i, j, and k form a triangle and is 0 otherwise. So this is the number of triangles in the graph. And, just because it will make the notation a little easier, let me rewrite this quantity as follows: here, T ranges over all triples of vertices, and x sub T denotes this product, so we can rewrite each term as a single variable x sub T. To apply the corollary, we will need to get some estimate of the variance of x, and we'll do so through covariance. As a reminder, the covariance of two random variables, say y and z, is defined to be the expectation of y times z minus the expectation of y times the expectation of z. In particular, if y and z are independent random variables, then their covariance is 0. And the variance of a random variable x is equal to the covariance of x with itself. So let us compute the variance of x, x being the number of triangles. We can rewrite it as the covariance of x with itself. Here, x can be written as a sum of terms indexed by triples of vertices; I'll use T for the indices in the first sum and T prime for the indices in the second sum. The nice thing about covariance is that it is bilinear: if you split y into a linear combination of sums, the covariance splits correspondingly as a sum. So given the covariance of these two sums, we can distribute and write it as the sum, over all pairs of triples T and T prime, of the covariance of x sub T against x sub T prime.
Now we need to understand this term, the covariance of x sub T against x sub T prime. Towards this end, let me do a side calculation. From the definition of covariance, this quantity equals the expectation of x sub T times x sub T prime, minus the expectation of x sub T times the expectation of x sub T prime. There are a few cases that we need to consider. T and T prime are two triples of vertices, and they could be in various positions: the corresponding triangles could be disjoint, or they could overlap, and we need to consider all of those possibilities. Before doing those cases, let's observe the following. Viewing T and T prime as triangles (so these are the vertices of T and these are the vertices of T prime), they could share some vertices or overlap in some edges. But no matter what the configuration is, the covariance equals p raised to the number of edges in the union of the two triangles, minus p raised to the number of edges of the triangle on T times p raised to the number of edges of the triangle on T prime. Here, the exponents correspond to numbers of edges in the complete graphs spanned by these vertices T and T prime. So let's look at the different possibilities. The first is when the two triangles T and T prime do not intersect in any edges, meaning the number of vertices they share is at most one. The two scenarios are the one that I just drew and the one where the two triangles are disjoint even in their vertex sets. Then there is no edge overlap, in which case the random variables that correspond to the edges of these two triangles are independent of each other.
And so the covariance should be 0. And indeed, the first term here is p raised to the power 6, and then we subtract p cubed times p cubed, so in this case the covariance is 0. In the second case, the two triples T and T prime overlap in exactly two vertices. Viewing T and T prime as triangles, the union of the two triangles has five edges; that is the exponent in the first term, counting each shared edge only once. So the covariance is p to the fifth minus p cubed times p cubed. Finally, if T and T prime are identical, then the union has three edges, so the covariance is p cubed minus p cubed times p cubed. Recall that p is some quantity that you can, for now, think of as going to 0 (it doesn't have to go to 0), so that the first term is always dominant; even if p is constant, the first term is still dominant up to a constant factor. So let us now continue the earlier computation. We need to think about how each of these cases can arise. The first case contributes 0 to the sum, so we don't have to worry about it. The second possibility contributes the following: n choose 2 for the two overlapping vertices, times n minus 2 times n minus 3 for the remaining two vertices, and in this case we have p to the fifth minus p to the sixth. And then the third possibility contributes n choose 3 times p cubed minus p to the sixth power.
This is some expression, and we only need some bounds, so we can play with asymptotics and not think about constant factors. So we'll ignore the constant factors and do an upper bound, up to a constant factor, with big O. The first term involves n to the fourth power times p to the fifth power; we can ignore the negative contributions, which don't matter for this calculation. And the second term is at most on the order of n cubed p cubed. Under the hypothesis that n times p goes to infinity, both of these terms are little o of n to the sixth power times p to the sixth power; that's something one can check from the hypothesis. Thus, the variance of x is little o of the square of the expectation of x. And so by the corollary of Chebyshev's inequality from earlier, we see that with probability approaching 1, x is positive, so typically this random graph contains a triangle. And that finishes the proof of the second part, theorem 1. So to recap, we showed that 1 over n is a threshold for the property of the random graph G(n,p) containing a triangle. The statement has two parts. The first part is that when p is quite small, much smaller than 1 over n, then typically this random graph does not contain a triangle. The way we established this fact is by showing that the expected number of triangles x goes to 0, and thus, by Markov's inequality, the number of triangles is 0 with high probability. In the second part, where p is much larger than 1 over n, the expected number of triangles goes to infinity as n goes to infinity. But that alone does not allow us to conclude that the number of triangles is typically positive. Instead, we need a second moment argument to show that the number of triangles is typically concentrated around its mean. And that was the calculation that we did.
This was a second moment calculation, namely understanding and computing the variance of this random variable and showing that the variance is much smaller than the square of the expectation, from which we can use Chebyshev's inequality to deduce that the number of triangles is typically positive. Combining these two proofs together, we obtain the threshold statement. Now, what we just showed is an important but fairly basic application of using probabilistic methods to understand typicality statements, and these types of statements and proof techniques are quite ubiquitous in studies involving the probabilistic method and beyond. There are much more difficult results of a similar flavor that involve much more advanced techniques; what you saw here is simply a taste of what such a statement and its technique can look like.
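The threshold behavior can also be illustrated empirically. The rough simulation below is my own sketch, not part of the proof, with arbitrarily chosen parameters: it samples G(n,p) on either side of 1 over n and compares how often a triangle appears.

```python
import random

def triangle_fraction(n, p, trials, seed):
    """Fraction of G(n, p) samples that contain at least one triangle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample the random graph: each pair becomes an edge with probability p.
        adj = [[False] * n for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u][v] = adj[v][u] = True
        # Scan all vertex triples for a triangle.
        found = any(
            adj[u][v] and adj[u][w] and adj[v][w]
            for u in range(n)
            for v in range(u + 1, n)
            for w in range(v + 1, n)
        )
        hits += found
    return hits / trials

n = 60
below = triangle_fraction(n, 0.1 / n, 50, seed=1)  # np = 0.1: triangles are rare
above = triangle_fraction(n, 10 / n, 50, seed=2)   # np = 10: triangles abound
print(below, above)
```

Even at this modest n, the two regimes are sharply separated: the sparse samples are almost always triangle-free, while the dense samples essentially always contain a triangle.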
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Crossing_Number_Inequality.txt
|
[SQUEAKING] [RUSTLING] [CLICKING] YUFEI ZHAO: In this video, let us look at an application of the probabilistic method to graph theory. We'll prove what is known as the crossing number inequality. Now, what is the crossing number of a graph? If I give you a graph, sometimes it's possible to draw the graph on the plane without crossings. For example, the complete graph on four vertices can be drawn on the plane without having any pair of edges cross. Such a graph is called a planar graph. But sometimes it's not possible to draw a graph on the plane this way, and the classic example is K5, the complete graph on five vertices. In that case, maybe you want to know the minimum number of edge crossings you can have in any possible drawing of the graph. So the crossing number of a graph G is defined to be the minimum number of edge crossings over drawings of G on the plane, using continuous curves as edges. It is a classic fact that K5, the complete graph on five vertices, does not have a planar drawing. But it is possible to draw this graph using only one crossing: we can do this by adding one vertex to this K4 drawing and drawing edges like this, and the final edge needs to do a crossing over here. So this example illustrates that the crossing number of K5 is 1. If you give me a graph with lots and lots of edges, should I expect that its crossing number is necessarily high? It turns out that's exactly what the crossing number inequality guarantees. The theorem says that in a graph G with vertex set V and edge set E, if the number of edges is at least four times the number of vertices, then the crossing number of G is at least the following quantity, on the order of the number of edges cubed divided by the number of vertices squared. Here, c is some absolute constant. So this is the theorem that we'll prove in this video.
Before proving it, let me state one corollary to illustrate how to think about this result. Suppose the number of edges in the graph is on the order of the number of vertices squared; this notation means that it is larger than some constant times V squared (here, c doesn't have to be the same as the c earlier). Then, plugging this hypothesis into the theorem, we obtain that the number of crossings of G is at least on the order of V to the fourth power. So if the graph has a lot of edges, then it must have a lot of crossings. And in fact, this is the right order of magnitude, because you can have at most on the order of E squared crossings: each pair of edges crosses at most once if you just lay out all the vertices in general position on the plane and draw the edges as straight lines. OK. So in the rest of the video, let me demonstrate the proof of the crossing number inequality. I'll split the proof into three steps. The first step is an analysis of planar graphs, namely graphs with no crossings. The following turns out to be true: if G is a planar graph (planar meaning that it is possible to draw G in the plane without edge crossings, in other words, the crossing number of G is equal to 0), then the number of edges of G is at most three times the number of vertices. Another way to say the conclusion is that the average degree of the graph is at most 6. This conclusion can be deduced from a fact from topology known as Euler's formula, which says that if you draw the graph in the plane in some way (for example, this drawing), then the number of vertices minus the number of edges plus the number of faces equals 2. Here, by faces we count each individual cell as well as the outside cell. So this drawing has three faces, and it has four vertices and five edges, and you can verify that Euler's identity is true in this case. It is also true in general.
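For completeness, the deduction of the edge bound from Euler's formula can be sketched as follows; this is a standard argument, stated here under the assumption that the graph is connected with at least three vertices (the video defers the details to the lecture notes):

```latex
% Each face of a planar drawing is bounded by at least 3 edges,
% and each edge borders at most 2 faces, so:
3F \le 2E \quad\Longrightarrow\quad F \le \tfrac{2}{3}E.
% Substituting into Euler's formula V - E + F = 2:
2 = V - E + F \le V - E + \tfrac{2}{3}E = V - \tfrac{1}{3}E,
% hence
E \le 3V - 6 \le 3V.
```

Summing the bound over connected components gives E at most 3V for planar graphs in general, which is the inequality used in the proof.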
Starting with Euler's formula, we can then bound the numbers of edges, faces, and vertices in terms of each other, using some inequalities to derive this conclusion. Let me not do that step here; I refer you to the lecture notes if you want to see the details. The second step of the proof allows us to go from one crossing to many crossings. So here's what happens. Given a graph G, it may not be planar, so it may require some crossings to draw G in the plane. But starting with such a drawing, we can delete some edges to make it planar. So given G, by deleting a number of edges equal to the crossing number of G, we can make it planar. In the earlier example of K5, the graph is not planar, but there's a drawing with only one crossing; by deleting one of the edges involved in that crossing, we remove one edge and end up with a planar graph. Well, this planar graph must satisfy the inequality from step one. So the number of edges of G minus the crossing number of G-- these are the deleted edges-- must be at most three times the number of vertices of G. Rearranging this inequality, we obtain the lower bound on the crossing number being the number of edges minus three times the number of vertices. So the idea here is really that if the inequality E at most 3V is not satisfied, then we get at least one crossing; and by keeping on deleting these crossings, we can go from one crossing to many crossings. Let's pause and examine this bound that we just proved. It is a valid bound, but it's not that great. For example, in this corollary, when E is on the order of quadratic in the number of vertices, the conclusion that we get at this stage is also quadratic in the number of vertices, which is not as high as we would like, namely the fourth power in the number of vertices. So this brings us to the third step. And this is the step where we'll use the probabilistic method.
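The step-two bound cr(G) >= E - 3V can be written down directly. A minimal sketch, with K5 and a complete graph as arbitrary illustrative inputs:

```python
# Step two of the proof: deleting one edge per crossing leaves a planar
# graph, so E - cr(G) <= 3V, i.e. cr(G) >= E - 3V (and trivially >= 0).

def weak_crossing_lower_bound(num_vertices, num_edges):
    """The 'one edge per crossing' lower bound on the crossing number."""
    return max(0, num_edges - 3 * num_vertices)

# K5 has 10 edges and 5 vertices, so this gives max(0, 10 - 15) = 0:
# a valid but useless bound here, even though cr(K5) = 1.
assert weak_crossing_lower_bound(5, 10) == 0

# For a dense graph the bound is only quadratic in V, e.g. the complete
# graph on 100 vertices (4950 edges):
assert weak_crossing_lower_bound(100, 4950) == 4650
```

As the lecture notes next, this quadratic growth is far below the fourth-power growth the theorem ultimately delivers.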
And this step, I'll call the bootstrapping step, where we go from a weak bound to a much stronger bound by using sampling. In this step, let's consider some p, to be decided later on; it's some number between 0 and 1. And let's consider a subgraph of G obtained by keeping every vertex of G with probability p independently at random and deleting the other vertices. When we delete the other vertices, we also throw away all the edges that are adjacent to the deleted vertices. This process then produces a different graph, G prime, whose vertex set is a subset V prime of the vertices. So those are the vertices that are kept by this random process. And then E prime is the set of edges that were kept, namely the edges of G that fall between the remaining vertices in V prime. Let's think about this graph G prime here. Well, it is a graph, so whatever we proved in step two still applies to this smaller graph. Namely, the crossing number of G prime is at least the number of edges of G prime minus 3 times the number of vertices of G prime. So this is a true and valid inequality. Well, remember that G is given as a deterministic graph, but we introduced some randomness to produce G prime. G prime is a random graph. So this inequality is true for every instance of this random graph, but it's also true, then, in expectation. So let's take the expectation of both sides, and by linearity of expectations, we can distribute the expectation into each term on the right. Let's think about the various terms. The easiest one to think about is the expected number of vertices that remain: since each vertex is kept with probability p, the expected number of remaining vertices is p times the original number of vertices. Next, the expected number of edges. Well, an edge is kept if both its endpoints remain. And the probability that both its endpoints are chosen is p squared.
So this quantity is p squared times the original number of edges. And finally, the slightly tricky one is the expected number of crossings. Now if I have a drawing of my original graph G, one way to think about G prime is to take the same drawing of G in the plane and use that drawing for G prime. So G prime is obtained by keeping some of the vertices. And if we use the same drawing, then each crossing is kept with probability p to the fourth. Now G prime is a different graph. It's a smaller graph. So potentially, there's a different way to draw G prime that produces even fewer crossings. But nevertheless, we have an inequality, because we could have always kept the drawing from G. And so we do always have this inequality here: the expected crossing number of G prime is at most p to the fourth times the crossing number of G. So this inequality comes from keeping the same drawing, but keeping in mind that it doesn't have to be an equality, because we could have chosen a different drawing for G prime. OK, so we derived this inequality here. And by bringing this p to the fourth power to the other side and rearranging, we obtain the inequality that the crossing number of G is at least p to the minus 2 times the size of the edge set, minus 3 times p to the minus 3 times the size of the vertex set of G. p was some parameter that we have yet to specify. And from this expression, then, we can optimize for the value of p. And one way to do this, because we only look for a bound that is up to a constant factor, is to set p to be 4 times the number of vertices divided by the number of edges of the graph. And it is important here that in the hypothesis of the theorem, the number of edges is at least four times the number of vertices, so that this p is indeed between 0 and 1. If p ended up being some number bigger than 1, then this proof would not have made sense. It would not make sense to keep vertices with probability bigger than 1. But p is between 0 and 1, so this makes sense.
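The two easy expectations in this sampling step, E[|V'|] = p|V| and E[|E'|] = p^2|E|, can be checked empirically. In the sketch below, the graph K10, the value p = 1/2, the seed, and the trial count are all arbitrary choices made for the demonstration:

```python
import random
from itertools import combinations

def sample_subgraph(vertices, edges, p, rng):
    """Keep each vertex independently with probability p; keep an edge
    only if both endpoints survive (the random process in the proof)."""
    kept = {v for v in vertices if rng.random() < p}
    kept_edges = [(u, v) for (u, v) in edges if u in kept and v in kept]
    return kept, kept_edges

rng = random.Random(0)
vertices = range(10)
edges = list(combinations(vertices, 2))  # K10: 45 edges
p, trials = 0.5, 20000

total_v = total_e = 0
for _ in range(trials):
    kept, kept_edges = sample_subgraph(vertices, edges, p, rng)
    total_v += len(kept)
    total_e += len(kept_edges)

mean_v = total_v / trials  # should be close to p * 10 = 5
mean_e = total_e / trials  # should be close to p**2 * 45 = 11.25
assert abs(mean_v - 5.0) < 0.2
assert abs(mean_e - 11.25) < 0.6
```

The same kind of simulation would show crossings of a fixed drawing surviving with probability p^4, which is the remaining ingredient of the step.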
And with this choice of p, continuing the earlier expression, we find that it is equal to 1 over 64 times the number of edges cubed divided by the number of vertices squared. And that finishes the proof of the result, showing that the crossing number inequality holds with c being the constant 1 over 64, although the precise value is not so important. This is a beautiful demonstration of the probabilistic method. And let me review what happened in this proof. First, we looked at what happens for planar graphs, those with no crossings. And here, we derived some bound on the number of edges that such a graph can have. Then we said that if you start with a graph and remove all its crossings by removing one edge for each crossing, we can deduce some lower bound on the number of crossings from the first step. That gives us a fairly weak lower bound on the number of crossings. And the final step, I think, is the most interesting mathematically. It's this bootstrapping step. Starting from a fairly weak bound, we take our graph G and sample it down to a smaller subgraph and apply the weaker bound to the smaller subgraph. This allows us to boost the weak bound into a much stronger bound, which turns out to be what is claimed in the theorem. So this finishes the proof of the crossing number inequality. And it's just one of many beautiful demonstrations of the probabilistic method in combinatorics and graph theory.
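The finished bound with the constant 1/64 can be sketched as a small function. The K100 comparison below is an arbitrary example showing how far the bootstrapped bound outgrows the step-two bound E - 3V:

```python
# The crossing number inequality with the constant from the proof:
# cr(G) >= E^3 / (64 V^2), valid when E >= 4V (so that p = 4V/E <= 1).

def crossing_number_lower_bound(num_vertices, num_edges):
    v, e = num_vertices, num_edges
    if e < 4 * v:
        raise ValueError("the theorem assumes at least 4V edges")
    return e**3 / (64 * v**2)

# For K100 (V = 100, E = 4950) this gives about 189,511 crossings,
# far more than the weak step-two bound E - 3V = 4650.
bound = crossing_number_lower_bound(100, 4950)
assert bound > 4950 - 3 * 100
assert 189_000 < bound < 190_000
```

Note the quartic growth in V when E is on the order of V^2, matching the corollary stated at the start.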
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
Lower_Bounds_to_Ramsey_Numbers.txt
[SQUEAKING] [RUSTLING] [CLICKING] YUFEI ZHAO: The probabilistic method in combinatorics is a powerful way to demonstrate the existence of certain special configurations in combinatorial objects by introducing randomness. In this video, we will see one of the earliest examples of an application of this method and its use to prove lower bounds to Ramsey numbers. Now, Ramsey numbers are interesting objects in combinatorics, and they are defined as follows. The Ramsey number R(k, l) is defined to be the smallest integer n such that no matter how we color the edges of a complete graph on n vertices-- we'll denote such a graph by Kn; a complete graph is n vertices with all the edges, all n choose 2 edges, present-- if we color each such edge either red or green, so with one of two colors, then there always exists a red Kk or a green Kl, that is, a red clique on k vertices or a green clique on l vertices. So the Ramsey number is the smallest n for which this is guaranteed. So to be more concrete, let me demonstrate an example. R(3, 3) equals 6, and what this means is the following. First, if we have six vertices and we color all the possible 6 choose 2 edges, each one with one of two colors, red or green-- so for example, maybe these edges are colored green and the remaining edges are colored red-- then no matter how we do this coloring, there always exists a triangle which is completely red or completely green. In this example here, you see this: there's a triangle that is completely red. On the other hand, if we start with only five vertices, then it is possible to color each edge with one of two colors so that there is no monochromatic triangle. So this is what it means for R(3, 3) to be 6. An important result in combinatorics due to Ramsey-- this is known as Ramsey's theorem, and it was proved back in the 1920s-- says that this number R(k, l) is always finite. It is always a well-defined number.
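The claim R(3, 3) = 6 is small enough to confirm by brute force. The sketch below checks all 2^15 colorings of K6 and exhibits a triangle-free coloring of K5 (cycle edges red, pentagram edges green); the specific K5 coloring is a standard example, not one drawn from the lecture:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j to a color 0 or 1."""
    for tri in combinations(range(n), 3):
        tri_edges = [coloring[e] for e in combinations(tri, 2)]
        if len(set(tri_edges)) == 1:  # all three edges share a color
            return True
    return False

# Every one of the 2^15 colorings of K6 has a monochromatic triangle.
edges6 = list(combinations(range(6), 2))
assert all(
    has_mono_triangle(6, dict(zip(edges6, colors)))
    for colors in product([0, 1], repeat=len(edges6))
)

# On K5 there is a coloring with no monochromatic triangle: color an edge
# 1 (red) if its endpoints are adjacent on the 5-cycle, else 0 (green).
coloring5 = {
    (i, j): 1 if (j - i) % 5 in (1, 4) else 0
    for (i, j) in combinations(range(5), 2)
}
assert not has_mono_triangle(5, coloring5)
```

Together these two checks are exactly the two halves of the statement R(3, 3) = 6.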
So as long as n is large enough, no matter how we color the edges of a complete graph on n vertices, there always exists a large monochromatic clique, either a large red k-clique or a large green l-clique. What I want to prove in this video is the following theorem due to Erdos from the 1940s. Erdos showed that R(k, k) is bigger than 2 to the k over 2 for every integer k at least 3. So this is a lower bound to the Ramsey numbers. And what this says is that if you have few enough vertices, namely at most 2 to the k over 2 vertices, then there is always a way to color the edges with two colors so that there is no monochromatic clique on k vertices. Erdos's theorem is an important foundational result and is also one of the earliest demonstrations of the probabilistic method. So we will see this proof in this video. In fact, we will prove a slightly more general statement. We will show that if two integers, n and k, satisfy the following inequality-- n choose k times 2 to the 1 minus k choose 2 is less than 1; right now, this is just some formula, it will come out of the proof, so let's not worry too much about it-- then R(k, k) is strictly larger than n. We will prove this second theorem, and by a calculation that is fairly straightforward, and which I will omit, one can use the second result to imply the first result. OK, so let us prove the second theorem over here. So what is this saying? If n satisfies this inequality, so that n is not too large, then it is possible to color the edges of a complete graph on n vertices so that there is no monochromatic clique on k vertices. How do we find such a coloring? Well, let's do it randomly. So let's color the edges of a complete graph on n vertices using two colors uniformly at random, meaning that for each edge, we flip a coin. If it comes up heads, we color it red. If it comes up tails, we color it green.
And the goal is to show that we can do this in such a way that avoids having a monochromatic clique of size k. So there are things that we want to avoid, and let us encode these bad events as follows. For every k-vertex subset S of the vertices-- there are n vertices in total-- we'll use A sub S to denote the bad event that we're trying to avoid, namely the event that S induces a monochromatic clique in this coloring. Let's calculate the probability that this occurs. So what's the probability that in this S-- let's say k is 4 for the purpose of this illustration-- all six of the edges have the same color? Well, there are two possible colors, and for each of those colors, the probability that all of these edges are of that color is 2 to the minus k choose 2. So the probability that all the edges in S are entirely of the same color is 2 times 2 to the minus k choose 2. Well, what about the probability that there is some clique of size k in the graph that's monochromatic? That's really the quantity that we care about. So let's consider the probability that there is some monochromatic clique of size k in this graph. Well, if there is some monochromatic clique of size k, then its vertex set must be one of these sets S. So we can do what's called a union bound to upper bound this probability by the sum of the individual event probabilities, summing over all subsets S of the vertices with size exactly k of the probability that the event A sub S occurs. In this summation, there are n choose k terms, and each term is the probability that we calculated earlier. And you see this expression: it was the expression in the hypothesis of the theorem, and we assumed that it is strictly less than 1. And this is why we had this expression in the theorem. OK, great.
So the probability that there is some monochromatic clique of size k is strictly less than 1, and thus with positive probability a random coloring has no monochromatic clique of size k. And that's basically what we are trying to do. We're trying to demonstrate the existence of a coloring with no monochromatic clique of size k. And we showed that, in fact, by doing everything at random, the random coloring succeeds with positive probability. And therefore, such an object must exist. Thus such a coloring exists. And this finishes the proof of the theorem that we were trying to demonstrate. And as I mentioned earlier, by a more routine calculation, which we will not do here, one can deduce Erdos's result from this theorem. And in fact, by a more careful calculation that, again, I will not do, one can show an even more precise lower bound: R(k, k) is at least 1 over e times root 2, plus a term that goes to 0 as k goes to infinity, all times k times 2 to the k over 2. That is, R(k, k) >= (1/(e*sqrt(2)) + o(1)) * k * 2^(k/2), where e is the natural constant, around 2.71. So this is a much more precise bound that one can get out of analyzing the consequence of this theorem that we just proved. In the next part of this video, we will refine these techniques to improve this lower bound. But for now, let me make a comment about this technique. What this theorem shows is that a random coloring has a positive probability of getting us what we want. And in fact, the probability that this coloring doesn't get us what we want is this quantity here, which is not only less than 1, but is actually quite small. It is very, very small. So if you, for example, on a computer flip these random coins and come up with this coloring, with very high probability, it will succeed.
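The hypothesis C(n, k) * 2^(1 - C(k,2)) < 1 is easy to evaluate exactly with integer arithmetic. The sketch below checks that n = floor(2^(k/2)) satisfies it for a range of small k, which is the calculation behind Erdos's bound; the range of k and the search window are arbitrary choices:

```python
from math import comb

def union_bound_condition(n, k):
    """True if C(n,k) * 2^(1 - C(k,2)) < 1, written with exact integers:
    equivalently 2 * C(n,k) < 2^C(k,2)."""
    return 2 * comb(n, k) < 2 ** comb(k, 2)

# n = floor(2^(k/2)) satisfies the condition, so R(k,k) > 2^(k/2).
for k in range(3, 16):
    n = int(2 ** (k / 2))
    assert union_bound_condition(n, k), (n, k)

# The condition actually tolerates somewhat larger n; for k = 10 the
# largest admissible n is well above floor(2^5) = 32.
best = max(n for n in range(1, 500) if union_bound_condition(n, 10))
assert best > 32
```

Because C(n, k) is increasing in n, the condition is monotone, so the `max` really is the threshold within the window.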
However, it will be very difficult to verify that you have succeeded, because the number of cliques that one needs to check is n choose k, which grows very quickly as a function of k. So this is a phenomenon that is sometimes called finding hay in a haystack. A random coloring works with overwhelming probability. And so if you just pick something at random, it will work. But one, it is very hard to check that what you found actually works. And two, it is an active research direction, and what seems to be a very difficult problem, to find other methods-- ones that do not involve randomness, or that involve randomness in some other way-- that guarantee that what you found works. And this finding hay in a haystack describes this phenomenon, which is quite counterintuitive, and also demonstrates the power of the probabilistic method: by just doing everything at random, it works really, really well. But we don't understand this process as much as we would really like to. So now you've seen a proof of Erdos's seminal result that gives a lower bound to Ramsey numbers. In the next two segments of this video, we will see a couple of refinements to this technique that allow us to get slightly better lower bounds by introducing new ideas to the probabilistic method. Now, let us introduce an additional idea to the probabilistic method to prove a slightly better lower bound on Ramsey numbers. And this is the method of alterations, called the alteration method. What we'll prove is the following. So let me write down the statement. It's slightly technical. The statement itself is not as important as the ideas I will introduce, so bear with me for a second. The theorem says that for any positive integers k and n, the Ramsey number R(k, k) is bigger than the following quantity: n minus n choose k times 2 to the 1 minus k choose 2. OK, so that's the theorem. It gives you a lower bound on Ramsey numbers.
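The "hay in a haystack" remark can be illustrated by simulation. In the sketch below, k = 6 and n = 8 (just at the 2^(k/2) threshold), the seed, and the trial count are arbitrary demonstration choices; the union bound predicts a failure rate of at most C(8,6) * 2^(1-15), roughly 0.17%:

```python
import random
from itertools import combinations

def random_coloring(n, rng):
    """Flip a fair coin for each edge of Kn."""
    return {e: rng.randrange(2) for e in combinations(range(n), 2)}

def has_mono_clique(n, k, coloring):
    """Verifying a coloring means checking all C(n, k) k-subsets."""
    for s in combinations(range(n), k):
        if len({coloring[e] for e in combinations(s, 2)}) == 1:
            return True
    return False

rng = random.Random(0)
trials = 1000
failures = sum(
    has_mono_clique(8, 6, random_coloring(8, rng)) for _ in range(trials)
)
# Essentially every random coloring succeeds...
assert failures / trials < 0.05
# ...yet certifying even one coloring already scans all C(8,6) = 28
# six-vertex subsets, and C(n, k) explodes as k grows.
```

This is exactly the tension described above: sampling hay is easy, certifying it is the expensive part.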
So, by optimizing the value of n as a function of k, we can deduce the following corollary. And I will not show this deduction here because it's a more routine calculation. The corollary is that R(k, k) is bigger than the following quantity: 1 over e-- e is the natural constant, 2.71 and so on-- plus little o of 1-- a term that goes to 0 as k goes to infinity-- times k times 2 to the k over 2. That is, R(k, k) >= (1/e + o(1)) * k * 2^(k/2). So that's the asymptotic lower bound that one obtains by taking this theorem and optimizing the value of n as a function of k. We see that it is slightly better than the bound that we got previously, better by a factor of root 2. OK, so let us prove this theorem here. Like before, we will start with a random coloring but then make some adjustments. So in fact, this construction will have two steps. Construct a random coloring in two steps. In the first step, we'll do what we did previously: just randomly color each edge with one of two colors uniformly at random. OK, so flip a coin for each edge, and color it with one of the two colors. Now, this coloring might have some monochromatic k-cliques, in which case we will delete a vertex from each such k-clique to destroy all the monochromatic k-cliques. So the second step is to delete a vertex from each monochromatic k-clique. OK, so after this process, what we are left with is an edge coloring of some complete graph where there are no monochromatic k-cliques left, because we've destroyed all such monochromatic k-cliques in the second step of this process. How many vertices are left? So we started with initially n vertices, but in the second step we deleted some. So we don't have as many vertices as we started with initially. So let's ask ourselves, how many vertices are left at the end of this process? Towards this goal, let me introduce a random variable X. And X is the number of monochromatic k-cliques at the end of the first step.
Well, the expectation of X can be computed as follows, by a calculation very similar to what we did earlier. There are n choose k possible k-cliques, and each one of them is monochromatic with probability 2 to the 1 minus k choose 2. So the expected number of monochromatic k-cliques is n choose k times 2 to the 1 minus k choose 2. Now, in the second step of this process, we delete some number of vertices, one vertex for each monochromatic k-clique, although some vertices may be used to destroy more than one monochromatic k-clique. So the number of vertices that we delete is not necessarily X, but it is no more than X. So we delete at most X vertices. And thus, the final graph that we get at the end of this two-step process has at least n minus X vertices. Let me remind you that this is a random process. The final graph and the coloring that we get are random, and X is also a random quantity. So this number of vertices is random, and its expectation is at least n minus the expectation of X, which is n minus n choose k times 2 to the 1 minus k choose 2. OK, so the final graph has at least this many vertices in expectation. Well, that's the average over this random process, and thus there must be, with some positive probability, some outcome that beats this average. Thus, with positive probability, the remaining graph has at least this many vertices. And furthermore, it has no monochromatic clique on k vertices, because we destroyed all such cliques in the second step of this process. OK, this finishes the proof of this theorem, which, as you see in the corollary, gives a slightly better lower bound compared to what we saw last time. In the third segment of this video, we'll see yet another technique that gets us even further, to prove an even better lower bound on the Ramsey numbers.
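The alteration bound n - C(n,k) * 2^(1 - C(k,2)) holds for every n, so we are free to pick the n that maximizes it. A sketch comparing it against the plain union bound for k = 10; the value k = 10 and the search range are arbitrary choices that happen to be wide enough here:

```python
from math import comb

def alteration_bound(n, k):
    """Lower bound on R(k,k) from the delete-a-vertex argument."""
    return n - comb(n, k) * 2.0 ** (1 - comb(k, 2))

k = 10
# Largest n allowed by the union-bound condition C(n,k)*2^(1-C(k,2)) < 1.
union_best = max(n for n in range(1, 400)
                 if 2 * comb(n, k) < 2 ** comb(k, 2))
# Best value of the alteration bound over the same window.
alteration_best = max(alteration_bound(n, k) for n in range(1, 400))

# Starting with more vertices and paying for the deletions still wins.
assert alteration_best > union_best
```

This mirrors the corollary: the alteration method gains roughly a factor of root 2 in the asymptotic bound, and the gain is already visible at k = 10.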
Earlier, we saw two different approaches to using the probabilistic method to lower bound Ramsey numbers. The first method was taking a union bound to show that the sum of the probabilities of all the bad events is fairly small. And the second method was through alteration, where we start by constructing some random coloring and then fix the blemishes. So we showed that by removing the bad parts-- removing the vertices that contribute to monochromatic cliques-- we can get a coloring with no monochromatic cliques. Now let me introduce a more advanced method that can get us even further. So the general motivation, which we already saw a couple of times, is that we wish to avoid a certain collection of bad events, which we'll denote E1 through Em. So these are bad events. In our case, the bad events correspond to having a monochromatic clique in the random coloring. There are some extreme situations that are easier to handle. And these extreme situations are one of two types. The first is when the events collectively have very small probability, even when you just sum the probabilities of these events. So for example, if the sum of the probabilities of these bad events is less than 1, then we can apply the union bound to deduce that with positive probability none of the bad events occur. So that's an easy situation to handle. Another easy situation to handle is if all the bad events are independent. So if they are independent events, then the probability that none of them occurs is equal to the product of the probabilities that the individual events do not occur. And this is positive, provided that all the individual event probabilities are strictly less than 1. Now, this situation typically is not the case in a lot of applications, including the one that we're considering.
For example, if you have two triangles that overlap in an edge, or more generally two cliques that intersect in some large portion, then the events that they are individually monochromatic are not independent. So what we're often dealing with are situations where there are some event dependencies. And often, these are the hardest and most interesting situations that one has to deal with in the probabilistic method. A powerful tool that we will see is the Lovász local lemma. So let me explain what the Lovász local lemma is and then apply it to the problem of lower bounds to Ramsey numbers. So here is a version of the statement of the Lovász local lemma, specifically applied to a setting known as the random variable model. The setting is as follows. Suppose we have x1 through xv, and these are random variables. And importantly, they are independent random variables. So think of them as the outcomes of independent coin tosses. We'll have B1 through Bm denote index sets. So they are subsets of the numbers 1 through v. And for each i ranging from 1 through m, let Ei be some event that depends only on the variables indexed by Bi, namely the variables x sub j as j ranges over the elements of Bi. So this is a situation that captures a lot of applications where you have bad events-- for example, cliques being monochromatic in a coloring of a large graph. And some of these events may have dependencies because the events are based on some underlying independent random variables, and different events may involve overlapping variables. So this is capturing that kind of setup. Now, let me introduce some hypotheses that capture the notion of a weak amount of dependence, a small amount of dependence. Suppose for every i from 1 to m, the index set Bi is disjoint from all but at most d other Bj's.
So d here is another parameter in the theorem. So each Bi has overlapping variables with only a small number of other Bj's. And furthermore, another hypothesis is that each bad event occurs with probability that is not too large: the probability of each Ei is at most 1 over e times d plus 1, that is, at most 1/(e(d+1)). Here, e is the natural constant, 2.71 and so on. And this is true for every i. OK, so these are all the hypotheses. There's a lot, but roughly speaking, it's saying that every bad event is related to only a small number of other bad events, and each event occurs with not too high a probability, which depends not on the total number of events, but only on this d, the number of other events that it is associated with. The conclusion is that the probability that none of the bad events E1 through Em occurs is strictly positive. So with positive probability, none of these bad events occur. OK, so that's the statement of the Lovász local lemma. And now let us apply this local lemma to prove an even better lower bound than what we saw earlier for Ramsey numbers. We will not be able to prove the local lemma in this video. Its proof is, although not long, quite intricate. And I refer you to the references for a proof. So let us now see a lower bound to Ramsey numbers, and this is a result due to Spencer, Joe Spencer, from the '70s. The statement is that if k choose 2 times n minus 2 choose k minus 2, plus 1, all times 2 to the 1 minus k choose 2, is less than 1 over e-- that is, if (C(k,2) * C(n-2, k-2) + 1) * 2^(1 - C(k,2)) < 1/e-- then the Ramsey number R(k, k) is strictly bigger than n. So, as earlier, this is some mysterious-looking formula, and I want you to not worry too much about it. It will come out of the proof. What may be more illuminating is the consequence of this theorem.
So as a corollary, R(k, k) has the following asymptotic lower bound: root 2 divided by e, plus little o of 1, times k times 2 to the k over 2. That is, R(k, k) >= (sqrt(2)/e + o(1)) * k * 2^(k/2). And this beats the bound that we saw earlier by yet another factor of root 2. This may seem like a small improvement, and quantitatively, indeed, it's only a constant factor more, whereas for the remaining asymptotics we do not know how far this is from the actual truth. But this result, proved in the '70s, is the best known lower bound to date for the diagonal Ramsey numbers, these R(k, k)'s. OK, so let us now see the proof of this theorem of Spencer, which applies the Lovász local lemma. As earlier, we will begin by coloring the edges of the complete graph on n vertices with two colors uniformly at random. And also, similar to the first proof that we saw, for every k-vertex subset S, let us define a bad event E sub S to be the event that S induces a monochromatic clique on the vertices in S. And we also saw earlier that the probability of this bad event is 2 raised to the 1 minus k choose 2. OK, let us see how this setup corresponds to the random variable setup earlier. The variables correspond to the colors of the edges. So in other words, they are indexed by the n choose 2 edges of the complete graph Kn. So there are n choose 2 such variables. Well, what about this condition here? Each Bi is a set of size k choose 2, corresponding to the edges of the complete graph inside a clique of size k. So when do the Bi's overlap? If S and S prime are both k-vertex subsets of vertices, then their cliques overlap in edges if and only if the sets of vertices S and S prime overlap in at least two vertices. So as an illustration, if you have this clique of size 4 and another clique of size 4, then they overlap in at least one edge if and only if the vertex sets overlap in at least two vertices.
So to apply the local lemma, we need to check how many other S primes intersect a given S in their edge sets. And this is a calculation that we can do as follows. For each k-vertex set S, the number of k-vertex sets S prime that have intersection of size at least two with S is at most k choose 2 times n minus 2 choose k minus 2: I pick a pair of vertices of S where there is this overlap, and then select the remaining k minus 2 vertices of S prime from the remaining n minus 2 vertices. There is some possible overcounting here, because some S primes may be counted in more than one way through this formula, but it is certainly an upper bound. And now we can apply the local lemma. One checks that as long as the inequality in the hypothesis of the theorem is satisfied, the hypothesis in the statement of the local lemma is satisfied as well. So we can apply the local lemma to deduce that with positive probability none of the bad events E sub S occur, for S ranging over all k-vertex subsets of the graph. And therefore, with positive probability, none of the k-vertex subsets induce a monochromatic clique. And that's exactly what we were trying to show with this claim over here. And this concludes the proof of Spencer's theorem. So you see that this is a demonstration of the powerful Lovász local lemma. And it is a very interesting method that was pioneered by Lovász and has had important applications subsequently in the development of the probabilistic method. Although we did not see the proof of the local lemma here today, I hope that you can at least appreciate its application. And throughout this video, we saw three different approaches to lower bounding the diagonal Ramsey numbers using different ideas from the probabilistic method.
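Spencer's condition is again a finite check for any particular n and k. A sketch, with k = 10 as an arbitrary test value; the quantity C(k,2) * C(n-2, k-2) is the upper bound on the dependency degree d computed above:

```python
from math import comb, e

def spencer_condition(n, k):
    """True if (C(k,2) * C(n-2, k-2) + 1) * 2^(1 - C(k,2)) < 1/e,
    the hypothesis that lets the local lemma give R(k,k) > n."""
    d_plus_1 = comb(k, 2) * comb(n - 2, k - 2) + 1
    return d_plus_1 * 2.0 ** (1 - comb(k, 2)) < 1 / e

# For k = 10 the condition holds at n = 98 and fails for somewhat
# larger n, so the local lemma gives R(10, 10) > 98.
assert spencer_condition(98, 10)
assert not spencer_condition(110, 10)
```

Note the key structural difference from the union bound: the condition involves the local dependency count C(k,2) * C(n-2, k-2) rather than the total number of events C(n, k), which is what wins the extra factor of root 2 asymptotically.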
It is a powerful method, and it has many other beautiful applications to various areas of combinatorics and beyond.
Large_Bipartite_Subgraph.txt
[SQUEAKING] [RUSTLING] [CLICKING] YUFEI ZHAO: The probabilistic method in combinatorics is a powerful method that can be used to demonstrate the existence of some object by introducing randomness. In this video, we'll look at a basic example illustrating an application of the probabilistic method. We'll show this theorem over here, which says that if G is a graph with m edges, then G has a bipartite subgraph with at least m over 2 edges. So let me remind you of the definition: a bipartite graph is one where I can partition the vertices into two halves, two sets of vertices, such that all the edges go from one side to the other. So that's an example of a bipartite graph. And what this theorem is saying is that if we start with any graph G with some number of edges-- in this case, five edges-- I can find a pretty large bipartite subgraph by keeping at least half of the edges of G. For example, we can do this, keeping 4 of the 5 edges, to get this subgraph G prime. And this graph G prime is bipartite, because I can put these two vertices on one side and these two vertices on the other side, so that all the edges go from the pink vertices to the green vertices, with none between the pinks and none between the greens. So this is an example of finding a large bipartite subgraph in a graph G. Now, let me prove this theorem, illustrating the probabilistic method. So we're given this graph G. And what we'll do is assign a color, either black or white, uniformly and independently at random to each vertex of the graph G. So imagine flipping a coin for each vertex and coloring each vertex black or white. After doing this, we can put all the white vertices of G on one side and all the black vertices of G on the other side. OK. So now, G has some number of vertices. And there will be some edges going across from white to black, some edges within white, and some edges within black. There could be a lot more edges.
But that's an example of what this graph might look like. And now, let me take G prime to be the subgraph consisting of the edges with white on one end and black on the other end-- so two different colors on the two endpoints of the edge. This is a subgraph; it's a subset of the edges of G. And note that every edge of G has probability 1/2 of being in G prime, because for a given edge, there are four different possibilities for the endpoint colors, these four possibilities are equally likely, and exactly two of them put the edge in G prime. Now, knowing this, by linearity of expectation, consider the number of edges of G prime-- here, G prime is a random subgraph. While G is given, G prime depends on the coin flips that we made to assign colors. So it's a random subgraph, and the number of edges of G prime is a random variable. But this random variable has an expectation, and we can compute this expectation using linearity of expectation: each edge falls in G prime with probability 1/2, and therefore the total expected number of edges in G prime is m over 2. This is the expectation. This is what happens on average. Thus, some instance of G prime has at least m over 2 edges. If on average we get m over 2 edges, then there must be some instance with at least m over 2 edges. And this G prime is what we're looking for. It has at least m over 2 edges. And it is bipartite, because we've kept only the edges from the black side to the white side. And so G prime is bipartite. This finishes the proof of the theorem. And you see that by introducing randomness, we can prove the existence of a structure with desired properties. That's a hallmark of the probabilistic method. And it's an important method that has lots of applications. This is one of the simplest examples of its applications.
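The proof above translates directly into a short program: repeatedly 2-color the vertices at random and keep the crossing edges until at least m/2 edges survive. The expectation argument guarantees such a coloring exists, so this terminates quickly. This is a minimal sketch, not from the lecture; the function name and the example edge list are illustrative.

```python
import random

def random_bipartite_subgraph(edges, num_vertices, seed=None):
    """Randomly 2-color the vertices and keep only the edges whose
    endpoints received different colors. Each edge is kept with
    probability 1/2, so the expected number of kept edges is m/2,
    and retrying a few times finds an instance with >= m/2 edges."""
    rng = random.Random(seed)
    m = len(edges)
    while True:
        color = [rng.randint(0, 1) for _ in range(num_vertices)]
        kept = [(u, v) for (u, v) in edges if color[u] != color[v]]
        if 2 * len(kept) >= m:
            return kept, color

# Five-edge example graph on four vertices (labels are illustrative)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
subgraph, coloring = random_bipartite_subgraph(edges, 4, seed=0)
```

The returned `coloring` is exactly the black/white partition from the proof: every kept edge goes between the two color classes, so the subgraph is bipartite by construction.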
|
MIT_18226_Probabilistic_Methods_in_Combinatorics_Fall_2024
|
Bounded_Differences_Inequality_aka_AzumaHoeffding_Inequality.txt
|
PROFESSOR: In this video, we'll look at a powerful tool in the probabilistic method known as the bounded differences inequality, which also goes by the name Azuma-Hoeffding inequality. This inequality tells us that if we have a function of independent random variables, and this function doesn't change very much if we change just one of its inputs, then the output of this function, given the random input, is fairly concentrated around its mean. Let's look at the actual statement of this theorem. We have x1, a random variable taking values in the probability space omega 1, and so on-- so x1, x2, up to xn. These are independent random variables. And it's very important that they're independent. We have a function, f, which takes as input n coordinates and outputs a real number. And the hypothesis of this theorem is that this function f satisfies the following property: if we look at two outputs of f on inputs x and y such that x and y differ on exactly one coordinate, then the value of f changes by no more than one. So f is, in some sense, fairly insensitive to fluctuations in the input. If you change only one input coordinate, then the output of f does not change by more than one. OK, what is the conclusion of the theorem? It says that the random variable obtained by evaluating f on these independent random input coordinates-- that's what we call this output, z-- satisfies the following concentration inequality. For every non-negative real lambda, the probability that z exceeds its expectation by at least lambda is at most this quantity here, which goes down rapidly as lambda gets large.
And we also have a lower tail concentration bound, which says that the probability that z is below its expectation by at least lambda is upper bounded by the same quantity, which again decays extremely quickly when lambda is large. So this is the statement of the bounded differences inequality. Again, the intuition here is that given independent inputs to a function which does not change by more than one upon changing any single coordinate, the output random variable is very concentrated around its mean. In the rest of this video, I want to present three applications of this inequality. The first application is meant to illustrate a very simple example of a function where we can apply the theorem. This function simply takes n Boolean inputs and outputs the real number obtained by adding up the input numbers. In other words, this is a sum of n different coin tosses, each coin toss resulting in a zero or a one. And you can check that this function has the property that we require in this theorem, namely that if you flip just one of the input coordinates, the output changes by-- well, in this case, exactly one, but certainly no more than one. Now we can apply the Azuma-Hoeffding inequality, or the bounded differences inequality, to this function, and that gets us a tail bound on the binomial distribution. In this case, you know that the expectation of z is exactly n over 2, and the inequality tells you that the deviation typically cannot be much more than something on the order of the square root of n. This bound is also known as the Chernoff bound. And in fact, the proof of the bounded differences inequality is very similar to the proof of the Chernoff bound, which you can view in a different video. Let me now go on to the next example. The next example concerns a problem called the coupon collector problem.
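Before working through the coupon collector problem, the first example above can be checked empirically: for z a sum of n fair coin flips, the upper tail P(z >= n/2 + lambda) should sit below exp(-2 lambda^2 / n). A small simulation sketch; the parameter choices are illustrative, not from the lecture.

```python
import math
import random

def binomial_tail_vs_bound(n, lam, trials=10000, seed=0):
    """Estimate P(z >= n/2 + lam) for z = x1 + ... + xn, with each xi a
    fair coin flip, and compare against the bound exp(-2 * lam^2 / n)."""
    rng = random.Random(seed)
    exceed = sum(
        sum(rng.randint(0, 1) for _ in range(n)) >= n / 2 + lam
        for _ in range(trials)
    )
    return exceed / trials, math.exp(-2 * lam**2 / n)

freq, bound = binomial_tail_vs_bound(n=100, lam=15)
```

With n = 100 and lambda = 15, the bound is exp(-4.5), roughly 0.011, and the empirical frequency comes out well below it, since the true tail probability is even smaller than what the inequality guarantees.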
The setup is that we have independent random numbers s1 through sn, each chosen uniformly from the numbers 1 through n. So these numbers are uniformly and independently chosen. You can imagine the setup as having n different types of coupons in a box: each time, you draw a random coupon, see what it is, return it to the box, and draw again. And s1 through sn is the list of coupons that you draw from this box. The random variable that we're interested in, z, is the number of missing coupons-- the number of coupons that you have not seen through this process. In other words, this is the number of elements of 1 through n that are not among the elements s1 through sn. So this is the number of missing coupons. It is a random variable, because s1 through sn are random. We wish to understand how z is concentrated around its mean. And for that, we can apply the bounded differences inequality. Note that viewing this quantity as a function from the inputs s1 through sn to the output number, this function satisfies the required hypothesis: namely, if you change one of the si's, you do not change the number of missing coupons by more than one. The number of missing coupons might not change at all, but it cannot change by more than one. So this means that we can apply the bounded differences inequality to deduce the conclusion that the probability that z deviates from its expectation by more than lambda is at most 2 times e to the minus 2 lambda squared over n-- the factor of 2 because here we're using the upper and lower tail bounds simultaneously. And the expectation is something that we can calculate pretty easily using linearity of expectation over these n different coupons. Each single coupon is missing with probability 1 minus 1 over n, raised to the power n, because this is the probability that a specific coupon, coupon i, is missing, which is the event that coupon i is not drawn in each of the n different random draws.
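The expectation n(1 - 1/n)^n, which is close to n/e, and the tight concentration around it are easy to see numerically. A quick sketch, assuming illustrative parameter values not taken from the lecture:

```python
import math
import random

def missing_coupons(n, seed=None):
    """Draw n coupons uniformly with replacement from {0, ..., n-1}
    and count how many of the n coupon types were never drawn."""
    rng = random.Random(seed)
    seen = {rng.randrange(n) for _ in range(n)}
    return n - len(seen)

n = 1000
expected = n * (1 - 1 / n) ** n   # exact E[z], close to n / e
avg = sum(missing_coupons(n, seed=s) for s in range(200)) / 200
```

For n = 1000, the expectation is about 367.7 while n/e is about 367.9, and the sample average over repeated runs lands very near this value, consistent with the 2 e^{-2 lambda^2 / n} tail bound.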
And this quantity is very close to n over e. OK. So this is an application of the bounded differences inequality to a function which is not linear, like before. The first example was much simpler because the function f was simply a sum of its inputs; here, there's a more complicated function. Our last example involves an even subtler function. And here, the result is a classic theorem in probabilistic combinatorics due to Shamir and Spencer from the '80s. The theorem concerns the chromatic number of a random graph. So let z be the chromatic number of the random graph G(n, p). Here, G(n, p) is the Erdős–Rényi random graph obtained by taking n vertices and putting an edge between every pair of vertices with probability p. So throw a probability-p coin for each possible edge independently and construct a random graph this way. And then z is the chromatic number of this random graph: the minimum number of colors required to color all the vertices so that no two adjacent vertices receive the same color. This is some random variable. And this random variable is pretty complicated. It's pretty hard to analyze. But nevertheless, using the bounded differences inequality, let us deduce the following concentration bound, showing that z typically is not too far away from its expectation. And specifically, we have the bound saying that the event that z deviates from its expectation by more than this quantity here has probability at most 2 times e to the minus 2 lambda squared. OK, so let's prove this theorem. The interesting part of this proof is how to set up the function f so that we can apply the bounded differences inequality. Right now, z is some quantity which seems kind of complicated, and it's based on something that is random. So how can we phrase it in terms of independent random variables in a way so that we can apply the bounded differences inequality?
Well, one way to do it, and it's a natural first attempt, is to view z as a function with n choose 2 inputs, one input for each possible edge of the random graph. That is a valid choice in the sense that it satisfies the bounded differences condition, but it will not give the correct bound, because it turns out to have way too many variables in this function. We'll take a look at a different method that will be able to provide us with the desired bound. And there is some neat idea here on how to cluster the random variables together. So we will represent our graph on n vertices labeled 1 through n. And this graph, which has some edges that we'll need to represent, we'll represent as an element of the following product set: omega 1 times omega 2 times omega 3-- these are Cartesian products-- and so on, up to omega n minus 1. And how we're going to encode the graph using an element of this product set is as follows. Omega 1 will be the set {0, 1}, and the bit we choose-- either 0 or 1-- will record whether there is an edge between the first vertex and the second vertex. So if we choose 0, then there's no edge between 1 and 2. If we choose 1, then there is an edge between 1 and 2. Omega 2 is the product set of {0, 1} with itself, and its two bits encode whether there is an edge between 1 and 3, and also whether there is an edge between 2 and 3. That information is encoded in the element of omega 2 that we choose. And omega 3 is now {0, 1} raised to the third power. And here, these three bits record whether the three edges from 1, 2, and 3 to the fourth vertex are included or not included in this graph, and so on. OK, so every graph on n labeled vertices can be represented as an element of this product set, and vice versa. Basically, we are clustering the edges together according to the higher endpoint of each edge, with respect to this ordering of the vertices. So why is this useful?
If we have two graphs, g and g prime, that differ only on edges around one vertex, then the difference of chromatic numbers, chi of g minus chi of g prime, is at most one in absolute value. So if you have two graphs, and one can be obtained from the other by modifying edges around a single vertex, then their chromatic numbers cannot differ by more than one. And the reason is that we could just choose a new color for the vertex that is involved, so we never need more than one extra color to go from a coloring of one graph to a coloring of the other. OK, so this is an important fact that turns out to be incredibly useful. And it's useful because if we have a graph represented as an element of this product set, then another graph obtained by changing just one coordinate differs from it only in edges around a single vertex. And therefore, the function f that sends an element of this product set to a real number-- namely, the chromatic number of the graph-- satisfies the bounded differences hypothesis. And by applying the theorem, we can then deduce the concentration bound here. So that finishes the proof of this theorem on the concentration of the chromatic number of a random graph. All right. So in this video, we saw the statement of the bounded differences inequality, which is an important and versatile tool that is used all over probabilistic combinatorics. It says that a function of independent inputs-- as long as the function has the property that it doesn't change very much if you only change a single coordinate-- is, as a random variable, highly concentrated around its mean. And we saw three different examples applying this bounded differences inequality.
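The clustering trick in the proof (often called vertex exposure) can be made concrete: block k holds the indicator bits of the edges from vertices 1 through k to vertex k+1, so changing one block changes only edges at a single vertex. A small sketch of the encoding, with an illustrative example graph not taken from the lecture:

```python
def encode_graph(n, edges):
    """Encode a graph on vertices 1..n as blocks (omega_1, ..., omega_{n-1}).
    The block for vertex k lists, for each i < k, whether edge {i, k} is
    present, so each block describes only the edges ending at one vertex.
    Changing a single block therefore changes the chromatic number by at
    most one -- the bounded differences condition."""
    edge_set = {frozenset(e) for e in edges}
    return tuple(
        tuple(int(frozenset((i, k)) in edge_set) for i in range(1, k))
        for k in range(2, n + 1)
    )

# Triangle on vertices 1, 2, 3 plus an isolated vertex 4
blocks = encode_graph(4, [(1, 2), (1, 3), (2, 3)])
# blocks == ((1,), (1, 1), (0, 0, 0))
```

Block sizes grow as 1, 2, ..., n-1, giving only n-1 independent coordinates instead of n choose 2, which is exactly why this encoding yields the sharper concentration bound.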
|
SLAM_Course_2013
|
SLAM_Course_10_Grid_Maps_201314_Cyrill_Stachniss.txt
|
Okay, so welcome to the course today. Last week we finished the Kalman filter and all its friends, like the UKF, EKF, information filter, and sparse extended information filter, and I promised you to now look into particle filtering as a second framework that we can use to perform state estimation in the context of the simultaneous localization and mapping problem. Before I actually start with that, I would like to use the first hour today to introduce grid maps, which are an alternative representation for modeling the environment compared to features and feature locations, and in the second hour today I will introduce the particle filter with the localization problem, which is an easier instance of the state estimation problem that we face, and then from next week on we will really start with particle filters for SLAM. So today: first hour, grid maps; second hour, revisiting the particle filter and explaining how localization works, because we will build on top of that when we address the SLAM problem next week. Okay, so if you go back to the first lecture that I gave, there was a taxonomy of the SLAM problem, explaining different aspects which we can use to distinguish SLAM algorithms, and one of those points was that some SLAM algorithms use features and others use volumetric or geometric representations of the environment in order to model the environment, in order to estimate what the environment really looks like. So far in this course we only looked into features, so landmark-based SLAM: we have the assumption that there are some landmarks out in the environment, and the robot has the ability to perceive those landmarks and can estimate some quantity which is related to the location of those landmarks. That can, for example, be the bearing information, so the heading information where the landmark is located relative to the orientation of the robot; that can be the distance information,
or both at the same time. The laser scanner, for example, gives me both: it gives me the distance to the obstacle and the direction where the landmark -- not necessarily an obstacle, but the landmark -- is located. If you have only a camera, from a single measurement you typically only get the orientation; you don't get the depth information unless you observe the landmark from different positions, so that you can do something like a triangulation, for example. Okay, and that's what you should be comfortable with: for example, estimating the x-y position of a landmark, of a tree trunk or something like this. So this is the example of the Victoria Park data set we have seen, where those landmarks, these yellow dots over here, are the trunks of the trees. That's one way of representing the environment. In order to do that, what we need to provide to the system is a technique to identify those landmarks. So given our raw camera data, our raw laser scanner data, or sonar data, whatever we have, we need to have a program which takes the perceptual input, the raw sensor data, and runs a detector which detects landmarks, and hopefully it's also able to say: this is landmark number one, this is landmark number two, this is landmark number three. So in order to do that, I need to have some knowledge about the environment that the robot will be deployed in. If I have no idea what the robot will see or perceive when I send it out on whatever mapping mission, it's very hard to do that, because we have no idea what we're going to experience. So an alternative approach is to say: okay, let us not use features to represent the environment, but let's take a different representation of the environment. Let us not only identify landmarks we can re-recognize; let's take the whole perceptual input in order to generate a map. And that's pretty convenient if you use laser range data, because you can directly map,
say, obstacle points or objects that cause the reflection of the laser beam, and enter them into a map representation. And if you do that, you will get something like you see down here, where black means occupied space, white means free space, and the gray here means a part of the environment the robot has never seen. You can see that's a corridor environment here; it looks similar to the architectural floor plans that you may be used to if you, let's say, want to rent an apartment and get a drawing of the floor plan. Of course, here in this floor-plan-ish thing we have all the objects that are in that environment, so it's not the clean floor plan, but if you see these maps for the first time, an architectural drawing is probably the closest thing to this kind of representation. Or you can take real 3D data, for example all 3D endpoints measured by the laser scanner, and use this to represent the environment. We will look today into this kind of representation here, a grid map or an occupancy grid map, and I will try to explain what the key concepts used in occupancy grid mapping are, how we actually compute such a representation, and why it is useful. As I said, so far we used feature maps or landmark maps. That's kind of the standard choice if you use Kalman-filter-based systems, because you want to estimate the locations of these features, of these landmarks, and it's also a very compact representation: for every landmark I just need to store one x-y location, which is pretty convenient. However, we need to know beforehand what we will experience, and if we can't do that, or we don't have a reliable feature detector, that's a problem. And therefore, as I said before, there's an alternative way of doing that, and one alternative is the grid map. So what is the key idea of this grid map, or occupancy grid map? The first thing
is that it says: okay, I don't have any predefined landmarks, so I need to represent the information "is there something or is there nothing" for every possible location in the space. And the easiest way to do that is to discretize the space into cells. You can see this as a two-dimensional histogram, or a matrix: we have zeros if this is free space and ones if there's an obstacle. Or like an image, where every dark pixel represents an obstacle at a certain location and everything which is white represents free space at a certain location. That's the way you can think about it. So it's a discretization of the space, like an image: every pixel corresponds to a small area in the space, let's say a 10 by 10 cm area in the environment, and depending on the gray value I may say this is more likely to be an obstacle, this is more likely to be free space. That's the easy way to see what the grid map does. So it's a discretization of the world, and it's a rigid grid -- this grid can't be stretched or modified, it's really a rigid grid over the environment, and every cell represents one local area. And then there is an assumption that the grid map makes: it says every cell is either occupied or free. Occupied means there's an obstacle in there, and free means there's nothing, it's free space. You may not see that at first sight, but it really means that the whole area that corresponds to a pixel is either fully occupied or fully free. So if you have 10 by 10 cm grid cells, and you have a pole like this guy over here standing here, this assumption is clearly violated, because the size of this pole is smaller than 10 by 10 cm, which would be something around that size here, so only part of the cell is occupied. That's an assumption the grid map makes: it assumes that this is
not the case, that everything is fully occupied or fully free. But again, that's an assumption. What the grid map then does is, for every cell, it stores the information: is that cell free or occupied? So it's a non-parametric representation of the objects in there. It's not like a Gaussian distribution estimating the location of an object, as we did in the landmark case, where we for example had a Gaussian estimate about the location of the landmark and could refine this position estimate with new measurements. That's something I do not have for the grid map; for every grid cell I estimate: is this free space or is this occupied space? If you go for large-scale maps, this of course is less compact than features, because even if you have large free-space areas, for every local region -- say 10 by 10, 20 by 20, or 5 by 5 cm, something along these lines -- you need to store a variable telling me whether this is free space or occupied space. So this is a disadvantage: it's computationally more costly to maintain such a grid structure. But it doesn't rely on a feature detector, so I completely get rid of the need for having some landmark detector or feature detector which I use on my raw sensor data; I can work directly on the raw sensor data, which is a nice advantage. Of course there are still some assumptions in there: if you have a 2D laser and something is not in the 2D plane that the laser scanner measures, you won't observe it. But this is similar to the field of view of your camera: if something's outside the field of view of your camera, you won't see it. We still have that assumption in there, but we don't need to care about what type of obstacle it is: if it reflects the laser beam, whatever shape it has, it will be entered into my map representation. So this is an example of how that looks. This is building 79, so that's my office over here, and it's actually still an old map,
so this room has been converted into offices, and some structural rearrangements have been made over the last years which are not reflected by that map. But this is a typical grid map that you obtain if you take a standard small Pioneer robot, in this case with a standard laser rangefinder on it; you steer it through the environment and you build a map of the environment. That's what you're going to see. Yes, please? [Student: What's the grid size?] That should be, I think, 10 by 10 cm or 5 by 5 cm. It's quite a few years ago that I recorded that; you record the raw data, and the question is at which resolution you render it. So I would expect this was rendered at 5 by 5 cm, but I can't guarantee that. The whole corridor should be something like 35, 36 meters in size. And if you zoom in -- so this is the zoomed view -- you can see here also some artifacts, these grayish things over here. Any idea where this grayish area comes from? [Student: Sensor noise.] One possibility is sensor noise. What else? [Student: The occupancy probability is not one.] But why is it not one? Because here and there it's pretty close to one, here it's pretty close to zero, but it doesn't know. Yeah, that has an effect on that, that's true, but there's something else that's pretty similar to the noise issue. I mean, that's true, that is part of this effect, you're right, but there's another effect I would like you to identify. We said we have these grid cells, and every grid cell is, let's say, a 10 by 10 cm rectangle. The assumption the grid map makes is that this is either fully occupied or fully free, and this assumption, for example, can never be justified if you have a wall which is, say, 45 degrees rotated with respect to the orientation of your grid. You can see this effect a little bit: the grid orientation is like the orientation of the image, but this wall, for example, is not perfectly aligned with the x-axis or y-axis of my grid. So it's something like 15 or 20 degrees
rotated, and therefore even if we try to perfectly align the grid structure and the walls, we won't get a perfect alignment. And this is also part of the effect of why this smears out a little bit: there are cells which are just partly occupied, and depending on the direction from which you measure them, you may measure them as free or occupied, which can lead to these effects. Okay, so let's have a look into the assumptions that the grid map makes in a little bit more detail. The first thing is what we said right now: the area that corresponds to a grid cell is supposed to be fully occupied or fully free, not like 50% occupied or something like that. So if we have, let's say, a 4 by 4 grid here, this area is occupied, this is free space. Again, it's a discretization, this is an approximation, but if we make the grid fine enough, we actually get a pretty good representation of the scene, similar to other discretizations which other sensor types use. So if the grid discretization is small enough, it's not that much of an issue. Of course, if you have 1 by 1 meter cells, then -- well, the larger the cell, the larger the approximation error, but you can control it by setting this resolution. Okay, so how do we represent that mathematically, or internally, in our representation of the world? What occupancy grid maps do is, for every one of those cells, they maintain a binary random variable. Binary means there are only two possible states, which are free and occupied, and we aim at estimating the probability that a cell is occupied. So if these are my grid cells, let's say this is m i, m j -- the index just refers to one of those cells over here. If we have a probability towards one, it means it's very, very likely -- or if P equals 1,
I'm sure that this cell is occupied. If the value goes toward zero, it means I'm pretty sure, or -- at P equals 0 -- I'm 100% certain, that this cell is free. P of m j equals zero means I'm sure it's free space; if it's close to zero, I'm very confident that it is actually free space. So that's the way we do it: we have a binary random variable for every individual grid cell. That's a modeling assumption, clearly. We don't have to do it that way, but it's not a bad idea to do it that way. Maybe there are alternatives that do it better, mathematically more sound, estimating which part of the cell may be occupied, but then you need to store more information per grid cell. That's the standard choice: a binary random variable per cell. And so, as we said before: if P of m i equals 1, the cell is occupied; if P of m i equals zero, the cell is definitely not occupied. If it's a value close to one, it means I'm pretty sure it's occupied; close to zero, I'm pretty sure it's free space. And a probability of 0.5 means I simply have no idea -- it's the same chance that this cell is occupied as that this cell is free. So the state of maximum uncertainty is P of m i equals 0.5. Okay, any questions about this assumption of individual grid cells and the fact of using a binary random variable to model that? Yes, please? [Student: What if the probability would be 0.6?] 0.6 means the probability that this cell is occupied is 0.6. What it means is: the representation that you generate is estimating what the environment looks like given noisy sensor data. Let's say you measure that cell and you see it five times occupied and four times free. It's simply hard for you to make a good statement, is it free or is it occupied, so you say: okay, the best estimate I have is that I'm somewhat more confident that this cell is occupied, but I'm not very certain about that. Okay. Assumption number two that the occupancy grid map makes, at least in its standard variant -- there are extensions which relax those assumptions, but the standard occupancy grid mapping algorithm assumes that the world is static. So there are no dynamics in the environment, there are no people walking around in the environment, no one is rearranging the scene while I'm mapping the space. We clearly can have sensor noise -- I'm not assumed to have perfect input data -- but I assume that no one changes the environment: the robot's not bumping into obstacles, moving them around, or something like this. So this cell is always occupied, this cell is always free; there's no one rearranging the environment. I can put additional modeling effort into this mapping algorithm, which you will see very soon, and then I may be able to take into account dynamic changes, but the standard algorithm says the world is static. This turns out to be important when I do my state estimation, measuring those cells multiple times, to make the statement: okay, that's sensor noise actually --
seeing it, say, four times free and five times occupied -- and not that someone changed the state of the world. And the third assumption is that all those cells are independent of each other. That means that the binary random variables that I maintain are independent of each other. So what does that mean? If I don't know anything about the world except that this cell is occupied, and the cells are independent of each other, it means it doesn't help me to estimate the state of this cell, of this cell, or any other cell in the map. So given that I know parts of the environment, it doesn't help me to estimate the remaining parts. In real environments, you can see that this is a strong assumption, an assumption which doesn't hold. Why is this assumption wrong? If I measure the wall over here and I see an obstacle, it's quite likely that the cell nearby is also occupied. It doesn't necessarily have to be the case, but if I make statistics over it: if I have a cell which is occupied, what's the likelihood that the neighboring cell is occupied? It's more likely than for a cell which appears in free space. And if I have a cell which is free, it's quite likely that its neighbors are also free -- more likely than that they are occupied. This just follows from the way we as humans build our environments, for example: the environment has a certain structure, it's not just random clutter. If you had random clutter, that would be perfect for the occupancy mapping assumptions, but as we don't live in perfectly random clutter, this is not true -- it doesn't hold in reality. It's a modeling assumption that we make, and there's nothing bad about making assumptions. The only thing is, you need to be aware of your assumptions, and if something breaks, you may need to revisit your assumptions. But making modeling assumptions per se is nothing bad -- we always have to make
assumptions; otherwise we could not model anything at all. Okay, so what does it mean for my probability distribution if we have independent random variables?

[A student gives an answer.]

That is a pretty strong statement, in the sense that even if the cells were dependent on each other, I could still use measurements to estimate their state. [Student: But you just need a direct measurement.] What do you mean by a direct measurement? [Student: That you just observe with the robot that this point is occupied, and this measurement is enough for computing the probability.] I do not fully agree with that statement: obviously, if you measure a cell and see it as occupied, that helps you increase the probability that the cell is occupied, and it does so whether the cells are dependent or independent of each other. Maybe you mean the right thing but did not express it precisely; it is related to that, but not quite what you said. [Student: It means you don't need the location of the robot from previous steps.] I do need to know where the robot is in order to build a map, as we will see later on; in fact this algorithm assumes the position of the robot is perfectly known.

The important point is this: if you think about this occupancy grid map, in this example consisting of 16 cells, so 16 independent binary random variables, it means the probability of a map, which is the joint over those 16 cells, can be computed as the product of the individual ones: p(m) = p(m_1) p(m_2) ... p(m_16). The probability distribution over the map is given by the product of the individual probability distributions. That is what independence means: if I know m_1, it does not help me at all to estimate m_2 or any other grid cell, so the joint can be broken up into a product of the individual probability distributions. Each m_i here is a binary random variable, but m is not a binary random variable; it is a collection of binary random variables.
Okay, is it clear what that means? So, an example. Let's say we have a small world with some occupancy probabilities, and I want to compute the probability of a given map. Say we have a grid map which consists of four cells, m_1, m_2, m_3, m_4, and let's just take some values: p(m_1) = 0.9, p(m_2) = 0.5, p(m_3) = 0.8 and p(m_4) = 0.1. These are just example values of what my current estimate of the world looks like, so my representation says 0.9, 0.5, 0.8, 0.1; say that is the best estimate I have. Now I want to know: according to my representation, what is the probability that the world looks like this, meaning these two cells are occupied and those two cells are free? Can you easily compute that? As we said, p(m), where this is one specific instance (maybe it is clearer if I write the random variable M equals a small m), is the product from i = 1 to 4, over the four cells, of p(M_i = m_i), where i is the index of the cells. We can now easily compute it. The probability that m_1, so this cell here, is occupied: we take this value, 0.9. Looking at m_2: what is the probability that m_2 is free? It is 1 - 0.5 = 0.5, so let's write it as (1 - 0.5). m_3 should be occupied; the probability that m_3 is occupied is 0.8. Times the probability that m_4 is free, which is exactly 1 - 0.1. So this is 0.9 * 0.5 * 0.8 * 0.9. I have not computed that number before, this is just an ad hoc example, but that is the probability that the world looks like this; it is what our occupancy grid map tells us. And the assumption that we can split those cells up into individual probability distributions makes this computation so easy.
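The worked example can be written out in a few lines. This is just a minimal sketch; the function name and the list encoding (1 for occupied, 0 for free) are my own, not from the lecture:

```python
def map_probability(estimates, instance):
    """p(M = m) = product over i of p(M_i = m_i), exploiting the
    cell-independence assumption. `estimates` holds p(m_i = occupied),
    `instance` holds 1 for an occupied cell and 0 for a free one."""
    p = 1.0
    for p_occ, m_i in zip(estimates, instance):
        p *= p_occ if m_i == 1 else (1.0 - p_occ)
    return p

# The four-cell example from the lecture: cells 1 and 3 occupied,
# cells 2 and 4 free: 0.9 * (1 - 0.5) * 0.8 * (1 - 0.1) = 0.324
map_probability([0.9, 0.5, 0.8, 0.1], [1, 0, 1, 0])
```

So the number the lecturer left uncomputed comes out to 0.324.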
We can just go over the individual cells in whatever order we want and compute this, without having to take into account that if one cell has a certain value it influences the probability of another cell; based on the independence assumption, that is something we do not have to consider. Is it clear to everyone what the three key assumptions are, what the map representation means, and what it is useful for? Okay, great.

So the next important question is: how can we actually obtain a map given data? That is what we are interested in here. Given sensor data, and, as we assume here, given the poses of the robot, what does the map look like? This is not SLAM: we assume we know the poses of the robot, which is why this is also called mapping with known poses, to stress that fact. We assume we know where the robot was at every point in time, and given that information, we want to estimate the map. Later on we will of course relax this assumption, using the particle filter in a smart way to deal with the problem, but the basic occupancy grid mapping framework assumes the poses of the robot are known. The sensor data can still be noisy; only the poses are assumed known. What we now want to compute is the probability of the map, say the one in this example, given my sensor data, for instance laser range observations or sonar observations, and given that I know where these measurements have been taken. Is it clear what is written here? The key question is how we do that estimation, and what we can do is break it up into the
individual grid cells: for each one we have to estimate a binary random variable, i.e., the occupancy probability of one specific cell given the data. That is what we are going to look into. So it is a binary random variable; how do we do state estimation based on sensor data in general, given what we have done in the course so far? [Student: We use the Bayes filter.] Yes, so let's use a Bayes filter here. And given that we agreed a Bayes filter makes sense for estimating that variable, how can we simplify the problem? What makes this problem slightly different from the problems we had before? [Student: There is no prediction step.] Exactly, we do not need a prediction step. And why not? [Student: I forgot the reason.] It is perfectly fine to forget the reason and still give the right answer. The reason is our assumption number two: the world is static. If the world is static, I do not need any prediction, because the world does not change; there are no actions, no one modifies the environment. So we only have correction steps, no prediction steps. This is also called a binary Bayes filter for a static state, or a Bayes filter for the static state, or a static-state binary Bayes filter; you will find all these different names for it. The point is that we have no controls, only observations, and that simplifies our estimation problem, as we will see in a minute. What I want to do now is go through the basic equations and derive the static-state binary Bayes filter, which addresses exactly the problem we need to solve here. The assumptions: no prediction step, and only two possible states, since it is a binary variable, so
occupied or free. Right, okay, now let's do it. What is the probability of m_i given our data? The first thing we do is apply Bayes' rule: we move z_t over, swapping m_i and z_t, a standard application of Bayes' rule. The next thing we do is make a Markov assumption in this term over here: if I perfectly know the state of the cell, I am not interested in the past observations and past poses, so I can drop z_{1:t-1} and x_{1:t-1}, because I know the state of the cell; that is the standard Markov assumption. Next, looking at this other term: I only have the observations up to t-1, and additionally the pose at time t, but no observation taken at that pose, so the pose alone does not help me estimate the state of the random variable, and I can drop x_t here. I know where the robot is, but I have no observation from that position. So this term has been simplified, because we dropped the previous observations and poses given that we know the state of the cell, and here we dropped x_t because it does not help the estimation. The next thing I want to do is look at the first term over here and expand it further: I apply Bayes' rule again to this first term, swapping z_t and m_i. This turns it into p of m_i given the current observation and the current pose of the robot, times the likelihood of the measurement given the pose without any map information, divided by p of m_i given where the robot was at that point in time. So I just take this equation and plug it in, ending up with this expression; this part here is the
part that came in from applying Bayes' rule. Okay, looking at the individual terms: how can we simplify this term in the denominator? Can we simplify the probability of the state of one cell, given only where the robot was but with no observation? x_t does not help at all. (Strictly, if the robot has a certain physical extent, the cell it is standing on must be free; but if we ignore this fact, the pose alone tells us nothing.) So I can drop this x_t as well, which gives the next line; I only changed this expression here and this expression over here. Okay, how should I continue? What I have used so far: it is a static state, so there are no controls in here, and I used that the cells are independent of each other, because I am looking at just one cell and not the whole map. Which of the three assumptions from the beginning have I not exploited so far? Exactly: we have a binary state, so the cell can be either occupied or free. What I derived here is for p(m_i), the cell being occupied; I can do exactly the same for the complementary event, p(not m_i), the cell being free, and carry out the same derivation, which gives exactly the same expression with every m_i replaced by not m_i. I just use the term "free" instead of "not occupied", because "not occupied" sounds a little clumsy, but read correctly they are the same thing. Now the key trick: I cannot simplify these terms much further as they stand, so let's take the ratio of the two expressions and see what we can do with it. I take this term divided by this term and check whether it simplifies the problem, and as you will see in a few seconds, it does.
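The two applications of Bayes' rule and the simplifications just described can be written out. This is my reconstruction of the boardwork, following the standard derivation:

```latex
\[
p(m_i \mid z_{1:t}, x_{1:t})
  = \frac{p(z_t \mid m_i, x_t)\; p(m_i \mid z_{1:t-1}, x_{1:t-1})}
         {p(z_t \mid z_{1:t-1}, x_{1:t})},
\]
\[
p(z_t \mid m_i, x_t)
  = \frac{p(m_i \mid z_t, x_t)\; p(z_t \mid x_t)}{p(m_i \mid x_t)}
  = \frac{p(m_i \mid z_t, x_t)\; p(z_t \mid x_t)}{p(m_i)} .
\]
```

The first line is Bayes' rule plus the Markov assumption; the second is Bayes' rule applied once more to the observation likelihood, with p(m_i | x_t) = p(m_i) because the pose alone carries no information about the cell.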
We just compute the ratio of these probabilities. We have this big expression, and we can see there are factors appearing both in the numerator and in the denominator; perfect, we can cancel them. We can cancel p(z_t | x_t), which appears here and here, and we can cancel p(z_t | z_{1:t-1}, x_{1:t}), which also appears in both. The next thing we can exploit is the binary random variable: p(occupied) is simply 1 - p(free), and the other way around. So I replace every expression containing not-m_i by 1 minus the corresponding expression with m_i, exploiting that assumption, which gives this equation: this term is simply 1 minus the other term, same here, same here. So far I am only talking about ratios of probabilities. But if I inspect the terms a little more closely, I see: this term uses only the current observation and the current pose of the robot, nothing else; this term uses only all the previous observations and previous poses; and this term is just the map prior, what I believe about a cell without any sensor observations. If you think about the recursive Bayes filter: this is a recursive term, this is a term for the current observation, and that is just prior information. So in that ratio I have one term using the current observation, one recursive term taking into account the whole past, and some prior information. That is really great, because it means I can do recursive state estimation, just as we did so successfully in our EKF framework and the other recursive Bayes filters so far.
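After cancelling the shared factors and writing the same derivation for not-m_i, the ratio just described becomes (my reconstruction of the slide):

```latex
\[
\frac{p(m_i \mid z_{1:t}, x_{1:t})}{p(\neg m_i \mid z_{1:t}, x_{1:t})}
= \frac{p(m_i \mid z_t, x_t)}{1 - p(m_i \mid z_t, x_t)}
  \cdot \frac{p(m_i \mid z_{1:t-1}, x_{1:t-1})}{1 - p(m_i \mid z_{1:t-1}, x_{1:t-1})}
  \cdot \frac{1 - p(m_i)}{p(m_i)} .
\]
```

The first factor is the current observation, the second is the recursive term, and the third is the prior.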
That is interesting: we have a prior, a recursive term, and the current observation. The main question now is how we get from our ratio of probabilities back to actual probabilities, and that is pretty simple. We have an expression of the form p(x) / (1 - p(x)) = y for some term y; how do we recover the probability? Multiply both sides by (1 - p(x)) to get p(x) = y - y p(x); bring y p(x) to the other side and factor out p(x), giving p(x) (1 + y) = y; divide by (1 + y), and we end up with p(x) = y / (1 + y). Here y is that large expression from before, and to avoid having y appear twice, we can rewrite this as p(x) = 1 / (1 + 1/y). Plugging in the y we had before, just a more complex way of writing the same thing, we end up with the final expression: to estimate the occupancy of a cell from sensor data, we have this term, whose recursive part is expanded recursively for every measurement.

If we do it this way, it is a little inefficient, because we always have to compute all these products and divisions, which are slow on a computer, especially in the old days; keep in mind this was developed around 1985 to 1989, when division was a pretty expensive operation. So people said: let's do it in a smarter way that is much more efficient to compute. What Moravec and Elfes, the two researchers who developed this approach, came up with is the so-called log-odds notation. Log-odds notation simply means: take the ratio of a probability and its complement, and put a logarithm in front. So we had this ratio, and the log-odds, this expression l here, is just the log of that ratio. With a logarithm, every product turns into a sum, which makes things much easier, so we compute everything in this log-odds notation. The formal definition is l(x) = log( p(x) / (1 - p(x)) ), and given a log-odds value we can recover the probability with the inverse expression. Of course, computing logarithms and exponentials is also computationally costly, at least on old computers or if the operation is needed very often; but if I can estimate the whole map in log-odds notation and only at the end convert every cell back to a probability, that is much more efficient.

So we take this log-odds notation and arrive directly at a simple algorithm, because in log-odds every product turns into a sum. The log-odds of our ratio consists of: a term called the inverse sensor model, so named because the sensor model would be p(z_t | m, x) whereas this is p(m | z_t, x); plus our recursive term; minus the prior. The prior just encodes what we assume about a grid cell without any data: more likely free, more likely occupied, or simply 0.5 in case we do not know. In short, we can write the log-odds l_{t,i} of grid cell i at time t as the inverse sensor model plus the recursive term minus the prior. All these terms are extremely efficient to compute, just sums; it is one of the most efficient map updates you can find.

We can use this to directly define the occupancy grid mapping algorithm, which is pretty simple. It iterates over all cells in the map; if cell i is in the perceptual field of the sensor observation, so in the field of view that my laser rangefinder or my sonar covers, I update it by taking its old value plus the inverse sensor model minus the prior. This is super efficient, because we only compute sums; that is the key trick. If the cell is not in the field of view, I just keep its old value. That's it: really efficient and really simple.

Again, this was proposed in the mid-to-late 1980s by Moravec and Elfes; one of the inventors, I believe, holds a patent on the approach, and whenever you use an occupancy grid mapping algorithm, in theory at least, you should be aware of that, although you may argue that the technique is so frequently used in robotics that it is questionable whether the patent still matters; but that is a different story. It was originally developed for dealing with noisy sonar data, so it is specifically designed to handle noise in the data, and it is still frequently used today: once you have pose corrections, once you have computed corrected poses, you render the occupancy grid map from them. It is therefore often referred to as mapping with known poses, to distinguish it from SLAM.
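The per-cell log-odds update just derived is a one-liner. A minimal sketch, assuming my own function names and an illustrative inverse-sensor value of 0.7:

```python
import math

def log_odds(p):
    """l(p) = log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Inverse of log_odds: p = 1 - 1 / (1 + exp(l))."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l_prev, p_inv_sensor, p_prior=0.5):
    """One correction step of the static-state binary Bayes filter,
    entirely in log-odds: inverse sensor model + recursive term - prior."""
    return log_odds(p_inv_sensor) + l_prev - log_odds(p_prior)

# A cell starts at the prior (log-odds 0 for p = 0.5); three "occupied"
# readings, each with p(m_i | z_t, x_t) = 0.7, push it towards occupied.
l = log_odds(0.5)
for _ in range(3):
    l = update_cell(l, 0.7)
# prob(l) equals 0.7**3 / (0.7**3 + 0.3**3), roughly 0.93
```

Note that, since the prior is 0.5 here, the subtracted term is zero and each update is literally one addition per cell.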
Okay, the question is still: how does this inverse sensor model look? We have not discussed that at all yet, and it obviously depends dramatically on the sensor. Let's look at a sonar. This is the robot, and this is the sonar cone: the signal spreads out in a cone, gets reflected, and I can measure the distance to the obstacle. These are the parts of the environment the sonar signal passed through, where there was no reflection, so they are more likely to be free; therefore those cells are drawn in white or whitish. The reflection happened somewhere here, so these cells are more likely to be occupied. That is the intuitive picture, and you can actually design a function with exactly those probabilities; I would like to show one of the standard approaches. This is just the sensor model, i.e., how the sensor data should be interpreted, and its parameters depend strongly on the properties of your sensor, for example how accurate it is. Let's consider just the cells along the optical axis, the main axis, of the sensor. Here is the robot, and say this is the measured distance z. In front of the measured distance, say I measure 2 meters, everything up to roughly 1.90 m is very likely to be free. From 1.90 m on I may not be 100% sure, because this is the region where sensor noise matters, and the closer I get to the measured distance, the more likely the cell is to be occupied, until I am quite sure it is occupied. Then, for a certain distance, I assume the cell is occupied; this corresponds to the average size of an obstacle, or the average thickness of my walls. Behind that, I fall back to the prior, typically 0.5, because I cannot see through walls, so I do not want to update the cells behind the obstacle. So with this model: everything in front is likely free, everything in this region is more likely occupied than free, and everything behind I simply do not know, because my sonar cannot look through walls. (If you had a sensor that could see through walls, you might choose different values there, but that is not a standard setting.) Along the optical axis I then update the cells accordingly: free, free, free, free, maybe a bit gray, occupied, and then unknown; this corresponds exactly to those three regions.

Here is an example with real-world data. This is the map at time step t-18, then t-17, t-16, t-15, and so on, and these are the corresponding inverse sensor models: given one observation alone, what do the grid cells look like? You see that some are more grayish and some more whitish, reflecting how certain each observation is: the shorter the measured range, the more accurate a sonar typically is, and the further away, the more uncertain it gets. So these are small local maps computed from single scans, and if you combine them you obtain additional information and end up with this map down here. That is actually the first data set I ever recorded myself, back in 2002 or 2003, with the first robot we had in the lab, Albert, which had 24 sonar sensors around its base giving these proximity readings. It shows one part of building 079: my office at the time, the kitchen, another office. That is the map you obtain, given you know the poses of the robot; that is the occupancy grid map.
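The along-axis profile just described (likely free in front of the measurement, a ramp near it, occupied for about one obstacle thickness, prior behind) could be sketched like this. All numeric values here are illustrative assumptions of mine, not the lecture's:

```python
def inv_sensor_model_sonar(r, z, prior=0.5, eps=0.2, thickness=0.3):
    """p(m_i = occupied | z) for a cell at range r along the beam,
    given measured distance z. `eps` is the noisy band in front of the
    measurement, `thickness` the assumed obstacle/wall thickness."""
    if r < z - eps:
        return 0.2  # clearly in front of the measurement: likely free
    if r < z:
        # ramp from "likely free" up to "likely occupied" approaching z
        return 0.2 + (0.7 - 0.2) * (r - (z - eps)) / eps
    if r < z + thickness:
        return 0.7  # around the measured distance: likely occupied
    return prior    # behind the obstacle: no information, keep prior
```

Feeding these values into the log-odds update, cell by cell along the beam, reproduces the free/gray/occupied/unknown pattern from the slide.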
If you want to estimate the most likely map from that, you just clip the occupancy values to zero or one: everything bigger than 0.5 becomes one, everything smaller than 0.5 becomes zero, and that is the most likely state of the world given the sensor data. You can see the thick walls here, and even these weird artifacts, which were just wrong measurements, ranges measured too long; you get these slightly odd effects purely as a result of sensor noise. The longer you observe the environment, the more such effects may vanish, although they are hard to eliminate completely, because the scans typically end before those cells, so it is very unlikely that you re-observe those parts; you have seen them just once, due to the noise, and never again. These are artifacts you typically get in occupancy grid maps.

If you use a laser rangefinder instead of a sonar, you have a single ray through the environment, and it is much more accurate than a sonar. The inverse sensor model for a laser typically looks like this: I am very sure the space is free up to the obstacle, then pretty sure it is occupied at the obstacle, and I do not update anything behind it. The width r of the occupied region can be the resolution of my grid, at least as long as the grid resolution is not so small that it interferes with the sensor noise; with, say, 5 cm grid cells, this is typically a pretty good model for a laser rangefinder. If you then steer the robot through the environment, you obtain this kind of endpoint data. You see some clutter around the space, which in this case was caused by a few people walking by. If you take this raw sensor data and turn it into a map, this is what the map looks like.
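For a laser, the corresponding profile is much sharper than the sonar one; again a sketch with illustrative numbers of my own:

```python
def inv_sensor_model_laser(r, z, prior=0.5, half_width=0.05):
    """p(m_i = occupied | z) for a cell at range r along a laser beam
    with measured range z; `half_width` is roughly one grid cell."""
    if r < z - half_width:
        return 0.05  # beam passed through: almost surely free
    if r < z + half_width:
        return 0.9   # beam endpoint: almost surely occupied
    return prior     # behind the endpoint: unknown, keep the prior
```

Compared to the sonar model, the free region is much more confident and the occupied band is only about one cell wide, reflecting the sensor's accuracy.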
You can see that most of the sensor noise, the people walking by, is actually eliminated, simply because the space has been observed so often: a person walking past is there for a very short time compared to the number of measurements the robot takes of that part of the space. So we end up with this map, a standard occupancy grid map built from laser data.

Another example: this is one of those fancy buildings at MIT, the Stata Center, where no wall is straight; it actually looks pretty straight here, but in 3D it is not straight at all. One of its spaces was mapped with a robot, and this is what a typical occupancy grid map of it looks like. You can ask: is this cell free or occupied? Check the value here, and you see it is more likely to be free than occupied; that is how you use the map.

Here is a short video of an exploration experiment, in this case in building 101, second floor, if I recall correctly, also around 2004 or 2005: a robot driving around with a laser scanner and building a map of the environment, again assuming known poses. You can see this mismatch here, a small pose misalignment, so the map is not perfectly estimated; a few centimeters, a few grid cells, are off. That is simply a result of assuming known poses.

To sum up occupancy grid mapping: occupancy grid maps discretize the space into a number of cells; all cells are independent of each other; each one is modeled by a binary random variable under a static-state assumption; and we use a static-state binary Bayes filter for every individual grid cell to estimate the probability that the cell is occupied or free. We can do this very efficiently with a recursive update scheme, especially using the log-odds notation, where the update is just summing up values; there is not more to
it than that. And there is no need for predefined features, which makes this a nice representation; that is why we in Freiburg especially like it, and a lot of the maps and mapping systems we develop rely on these grid maps, occupancy grid maps, or variants of them. There are extensions for dealing with noisy data and with changes in the environment, and there have also been extensions that try to relax the assumption that the cells are independent of each other, but that quite quickly becomes computationally problematic.

What I do not want to ignore at this point is what happens when, as I call it, grid mapping meets reality. That means: I take a real robot, not necessarily very well calibrated, record a data set, and build a map; you may get something like this. That does not look like a nice map, and if I actually show how the map is built up, you can quite quickly identify the problem: the assumption that we have known poses is fundamentally flawed. This is everything but known poses. It is actually a data set that Dirk Hähnel recorded with a B21r robot, similar to our Albert. The odometry had a systematic error: you can see the robot always drifted slightly to the right; it was not perfectly calibrated, and that is what you get. So the question is how we can fix this, and what the easiest fix is, because clearly such a map is completely useless; it cannot be used for any navigation task. By the way, the same thing would happen if you used landmarks and did mapping with known poses: you would obtain exactly the same kind of result, unless you used the full EKF and solved the SLAM problem. The problem is that motion is noisy and we cannot ignore it; if we ignore the motion noise,
we will simply fail. But depending on the sensor we have, especially with a laser rangefinder, the sensor itself turns out to be pretty accurate, so the idea is to use the sensor information itself to improve the pose estimate. One technique for doing this is what we often refer to as scan matching, or incremental scan alignment. It says: take scans recorded one after the other in time, try to align them so that they match best on top of each other, and use the resulting pose as a correction to the odometry. So we do not use odometry alone to estimate the new pose; we align the scans so that they overlap, fit with each other, as well as possible, and then use these improved poses and apply the same mapping algorithm. What you do in every step is find the best pose x_t*, the one that maximizes the product of the observation likelihood, given the map built so far, and the odometry:

x_t* = argmax over x_t of p(z_t | m_{t-1}, x_t) * p(x_t | u_t, x*_{t-1})

It is a mixture between how well the current scan overlaps with the map built so far (or with the previous scan), and the uncertainty from my motion model; I still want to take the odometry into account, and I want to maximize this term. There are various ways of doing that. To give you a bit of an illustration of how it could look: here are two poses, the red one produced one scan and the blue one generated the other, and the question is how to align the two scans so that we get the best alignment, also taking the odometry information into account. If I do that, I may obtain a plot like this: each of these tiles represents the translation in x and y between the two
locations, and every tile represents a different orientation; each color value is the likelihood, the value of the cost function. It is essentially an exhaustive search, if you want: I take the robot position, move it 1 cm to the right and evaluate the score, then 2 cm, 3 cm, doing that on a grid pattern, which gives exactly one of those tiles; then I rotate the robot a little and repeat the process, rotate a bit more and repeat again, which gives the other tiles. In the end I simply pick, among all tiles, the position with the maximum value, in this case this one here, and that is my alignment of the two scans. That is one way of doing it. There are different ways of maximizing this function; a very prominent one is the so-called Iterative Closest Point (ICP) algorithm, which takes the previous scan and the current scan, makes a data association between the endpoints, and then computes the translation and rotation that minimize the squared error between corresponding endpoints, using for example the SVD for that computation. Those of you who attended Robotics 1 have seen this technique. There is a whole zoo of different cost functions: point-to-point alignments, scan-to-scan alignments, point-to-plane, plane-to-point; variants using features; taking outliers into account, since some endpoints have no partner; RANSAC procedures for rejecting outliers. So there is a huge zoo of scan matching techniques, and I do not want to go into the details here; I just want to tell you that you need a scan matching technique to do the incremental alignment and come up with reasonable poses for occupancy grid mapping in reality.
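The exhaustive tile search just described could be sketched as follows. The map encoding (a dict of log-odds per cell), the cell size, and all names are my own assumptions, and the odometry factor of the objective is omitted for brevity:

```python
import math

CELL = 0.1  # grid resolution in meters (an assumption for this sketch)

def score(grid, scan_xy, pose):
    """Sum of map log-odds at the transformed scan endpoints.
    grid: dict (ix, iy) -> log-odds; scan_xy: endpoints in the sensor
    frame; pose: (tx, ty, theta) candidate pose of the sensor."""
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    total = 0.0
    for x, y in scan_xy:
        wx, wy = tx + c * x - s * y, ty + s * x + c * y
        total += grid.get((round(wx / CELL), round(wy / CELL)), 0.0)
    return total

def exhaustive_match(grid, scan_xy, init, d=0.05, n=5, dth=0.02, nth=3):
    """Brute-force search over candidate poses around `init`: for each
    orientation (one 'tile'), try all (x, y) offsets on a grid pattern
    and keep the pose with the maximum score."""
    x0, y0, th0 = init
    best, best_score = init, float("-inf")
    for k in range(-nth, nth + 1):        # one tile per orientation
        for i in range(-n, n + 1):        # x offsets within the tile
            for j in range(-n, n + 1):    # y offsets within the tile
                cand = (x0 + i * d, y0 + j * d, th0 + k * dth)
                sc = score(grid, scan_xy, cand)
                if sc > best_score:
                    best_score, best = sc, cand
    return best
```

In the full objective from the lecture one would also add the log of the odometry likelihood to each candidate's score; this sketch keeps only the map term.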
that scan alignment looks like for two 3D scans. This is actually a view on campus: this is building 79, here's the glass house, here's the entrance to the parking lot, here's the access control barrier to the parking lot, this is one of those lamps, and these are the trees. I think they actually just removed this tree over here, so it's no longer in the real world. You can see the blue scan is the reference scan; the robot drove a little bit and recorded the red scan. This alignment procedure aligns those two scans, and when you have found the best match, you take this as your new robot pose and continue, and again match the new scan against the previous one, and continue like that. If you do that with raw odometry, what we had before, versus scan matching one scan against the previous scan, you can clearly tell that this is an improvement with respect to the other map. Still, you see that scan matching is not perfect: especially if the robot revisits the same place in the environment, you can see there are multiple misalignments, because you see multiple walls. Here you see three walls, for example, just because the robot passed by three times. So the correction is not perfect, but it's much better than this correction over here. So if you want to build occupancy grid maps, you have to do at least some scan matching; that is the minimum you typically need in order to build accurate maps. We can also compare the motion models. This is generating samples from a distribution according to odometry: this was the trajectory that the robot took, and this sample set represents the uncertainty that you have. If you compare that with a scan matching model, you can see that scan matching typically has a much smaller uncertainty compared to the raw odometry model. That was scan matching
in five minutes. You don't need to know the details of scan matching; this is something which is covered by the Introduction to Mobile Robotics course. I just reported it here to give you at least a very rough idea of how you may solve that, and to let you know that there is such a technique and that you need it in order to solve this problem. The key idea is to use scan matching to match the current scan against the previous scan, the previous map, or whatever we have from previous observations, and to make an alignment which is as good as possible, taking into account my sensor model and my odometry information, greedily maximizing the cost function in order to come up with a better estimate. This typically gives me locally accurate maps: over the last n scans, the map looks pretty consistent. But if I revisit a place or drive through the environment for long periods of time, I may get inconsistencies. So this technique is often not sufficient to build large-scale maps, but for small-scale maps it actually works quite nicely. If you want to know more about that, again my references to the literature: the static-state binary Bayes filter is in the Probabilistic Robotics book, and grid mapping itself you also find in the Probabilistic Robotics book. If you want to know more on scan matching, there are, as I said, tons of papers reporting on that; this is the standard ICP reference, and the correlative scan matching, which does this grid search in a hierarchical manner and from which the figures you have seen are taken, is the approach of Edwin Olson. That's it from my side about grid maps and occupancy grid mapping, sorry, scan matching and occupancy grid mapping. Are there any questions about this? We will use this as a building block later on in our SLAM algorithms to build maps and solve the SLAM problem. You may not see the connection
to SLAM right now, but you should see the connection to building a map. So this is a way of building a map once we have the poses; it's a building block that we will need, definitely before Christmas. Are there any questions about that? Okay, perfect, and thanks for that.
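The exhaustive tile search over x, y, and θ described above can be sketched in a few lines. This is a toy illustration in Python/NumPy rather than Octave, under simplifying assumptions made up for the example: the observation likelihood is just the number of transformed scan endpoints that land on occupied cells of a reference grid, and the odometry prior is a simple Gaussian-style penalty. The names `score_pose` and `grid_search` are invented for this sketch.

```python
import numpy as np

def transform(scan, pose):
    """Apply a 2D rigid-body transform (x, y, theta) to Nx2 scan points."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return scan @ R.T + np.array([x, y])

def score_pose(scan, pose, occupied, odom_pose, res=0.1, odom_weight=1.0):
    """Observation term (hits on occupied cells) times an odometry prior."""
    pts = np.round(transform(scan, pose) / res).astype(int)
    hits = sum((tuple(p) in occupied) for p in pts)
    # crude Gaussian-style prior keeping the estimate close to odometry
    d = np.asarray(pose) - np.asarray(odom_pose)
    prior = np.exp(-odom_weight * d @ d)
    return hits * prior

def grid_search(scan, occupied, odom_pose, span=0.3, step=0.1):
    """Evaluate poses on a regular x/y/theta grid around odometry, keep the best."""
    best, best_pose = -np.inf, odom_pose
    offsets = np.arange(-span, span + 1e-9, step)
    for dth in offsets:
        for dx in offsets:
            for dy in offsets:
                pose = (odom_pose[0] + dx, odom_pose[1] + dy, odom_pose[2] + dth)
                s = score_pose(scan, pose, occupied, odom_pose)
                if s > best:
                    best, best_pose = s, pose
    return best_pose
```

A real implementation would evaluate a proper likelihood field and search much more efficiently, for instance hierarchically as in Olson's approach mentioned above; the point here is only the structure: score = observation term times odometry prior, maximized over a pose grid.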
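The SVD-based alignment step mentioned for ICP can also be written compactly. This is a minimal sketch, not a production scan matcher: brute-force nearest-neighbor data association, no outlier rejection (no RANSAC, no trimming), and the classical closed-form solution for rotation and translation from the matched pairs.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ P[i] + t ~ Q[i], in closed form via SVD
    (this is the step used inside each ICP iteration)."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mp).T @ (Q - mq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # fix a possible reflection via the sign of the determinant
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mq - R @ mp
    return R, t

def icp(P, Q, iters=20):
    """Toy 2D ICP: associate each point of P with its nearest neighbor in Q,
    solve for R, t, apply, repeat. No outlier handling."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # nearest-neighbor data association (brute force, O(n^2))
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matched = Q[d.argmin(axis=1)]
        R, t = best_rigid_transform(P, matched)
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # compose accumulated transform
    return R_tot, t_tot
```

The closed-form step is the standard one: subtract the centroids, take the SVD of the cross-covariance, and guard against reflections; everything around it (association, outliers, point-to-plane variants) is where the zoo of ICP flavors mentioned above differs.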
|
SLAM_Course_2013
|
SLAMCourse_00_Course_Introduction_201314_Cyrill_Stachniss.txt
|
Let's simply start. Welcome to the course on robot mapping. Today I would first like to give a very short introduction to what we are going to do here in this course, so which topics we cover, and then give a short and rather informal introduction to robot mapping. Some of you have attended the Robotics 1 course last term, so some of the concepts you see here will look familiar, although we dive much more into the details and also cover an additional paradigm which hasn't been addressed in the Robotics 1 course last year. We will actually spend most of the time on new topics, but some of the things you will see here may sound familiar; as I said, we're going more into the details, and my plan is also to do more hands-on work, so there are more programming assignments for you, to really develop state-of-the-art systems for robot mapping in the end. Roughly, the topics we are going to cover in this course are: first, as I said, an introduction to simultaneous localization and mapping; then we will revisit the Kalman filter and all its friends, not all its friends but a large number of them, that is, the extended Kalman filter, the unscented Kalman filter, and then we also look into information filters, which is basically Kalman filtering in information space, using the inverse form of the Gaussian distribution that we used in the Kalman filter. We will start all of that from the basics, so even if you have not attended the Robotics 1 course: there's more material here than in the Introduction to Mobile Robotics course and we go a bit more into the details, but we will start everything from scratch. There's no background knowledge that you need to have, but it may help if you've seen the basic concepts before. Then we will look into particle filtering, focusing especially on the aspects of particle filters for robot mapping, for SLAM, and this is mainly the Rao-Blackwellized particle filter, which we go into in detail. Then we start with error minimization approaches, and that's something I guess most of you haven't heard of before. These are, let's say, the most frequently used techniques if you look at robot mapping systems today: if you go to robotics conferences that address robot mapping, most of the work actually relies on this least-squares error minimization principle for building accurate maps of the environment. We will look into different flavors of these SLAM systems: hierarchical approaches, and techniques which deal with errors in the input data, because the input data is imperfect, as all data is if it is collected with sensors, and can have very large errors. Then we'll look into what is called a SLAM front-end: how do robots actually interpret their sensor data, how do they see a landmark, how do they re-recognize a landmark in order to make data association, to say that what I see now is the same thing I saw, say, an hour ago, and in order to make that association of their observations. And if there is enough time, so these last aspects depend on time, we will also look into appearance-based approaches, and I would probably also like to go a little bit more into the special aspects of Kinect-based SLAM, which is something that came up over the past few years since the Kinect sensor became available. Most of you know it: it is a camera and at the same time a quite nice depth sensor, so you have visual information and depth information available at the same time, and this allows you to build quite accurate, high-resolution models. At the same time, this sensor comes at a very low price, as it is mainly used for gaming; these sensors typically cost around 100 to 120, so you can buy this sensor at a very cheap price and use it for doing SLAM. So what do I want to do with
this course? What's my intention behind it? I would like to give you an introduction to robot mapping and SLAM, and the idea is that after you have attended this course you know all the key milestones that happened over the past, whatever, 20 years in robot mapping. Of course you will not know every single technique, as the course would need to be much longer, but at least you should know the key milestones and understand the key concepts of robot mapping techniques. So if you go to a conference and attend a robot mapping talk, you should at least have an idea what the person you see there is talking about; you should be able to understand, not fully understand, but at least get the key idea of the novelty they are presenting and why what you see there is probably cool. That's my main focus for this course. I would also like to give you some hands-on experience, and hands-on experience means that you actually implement systems on your own. That may be a bit challenging, and therefore we typically provide frameworks where, let's say, some of the boring functionality is already provided, so that you can focus on the key aspects of the core method, the core algorithm. Whenever you implement those algorithms yourself, you typically see that there's often a potential mismatch between what's written in the textbook and what you really need to implement, because there are some implementation details which turn out to be pretty important, and if you understand those, you also typically get a deeper understanding of the algorithm itself: what the problems of the algorithm are, whether it suffers from numerical issues, and all these things. That's something you typically learn when actually building the system yourself, and that's why, at least for the graph-based approaches and for the Kalman-filter-based approaches, we would really love you to implement the material on your own.
Also for the particle filter we will have some hands-on work, so that you really implement such systems yourself and are able to understand them and get a deeper understanding of the material. Whenever there are any questions, just interrupt me; the more interactive this course is, the nicer. So what do I expect from you in order to do that? You should have some basic math skills. That includes some basic linear algebra: you should know how to manipulate a vector, how to multiply a matrix with a vector, so you should have at least some knowledge of linear algebra. It doesn't mean that you're an expert in linear algebra and have attended a one-year course in linear algebra; that's not what I expect and not what we are going to need. But you should be familiar with the concept of vectors, the different products, how to multiply matrices, and what a Jacobian is. These are aspects that I expect you to know. In some of these situations, say for the Jacobian, I will reintroduce them and explain quickly what they are, but if you think you are missing some capabilities in the context of linear algebra, it may be good to revisit them. The second thing we look into are probabilistic concepts. Most of the SLAM techniques that we are going to discuss here are probabilistic techniques. What are probabilistic techniques? They are techniques which use probabilities, and that typically means that we have an explicit representation of the uncertainty. That's one of the key advantages of those mapping techniques, as all sensors are affected by noise: whenever we have a sensor which observes some aspect of the world, this data is typically noisy, and our algorithms which make decisions on this data are also not free of errors. Therefore, explicitly introducing the notion of uncertainty into our approaches helps us to build more robust systems and make better decisions, and that's the reason why probabilistic
concepts will be used in this course. So things like: what is a probability distribution, what is Bayes' rule; some of the very basic things you should actually know, and if this is not the case, you should revisit them. Take, for instance, the introduction lecture in the Introduction to Mobile Robotics course, which is probably a two- or three-hour lecture, chapter four or five, probably five, where you start from the axioms of probability theory and derive the basic rules. That's what you need; you won't need more, but this is something I expect you to know. You should also have basic programming skills. This is nothing you need to understand the lecture itself, but it's something you need in order to be able to complete the homework assignments. These homework assignments are free, so you don't have to do them; you can participate in the exam without ever having seen one of the exercise sheets or homework assignments. It's an offer that we give to you. We are going to grade them: Rina and Fabrizio, the two guys sitting there in the back, will take care of the exercises. They will create the homework assignments, correct your submissions, and give you feedback on what's wrong and what you should improve. I make this an offer to you because this is a specialized course; I'm not going to force you to do these homework assignments. Feel free to do them; I encourage you to do them; people who do them are typically better in the exam, but it's absolutely free, so no one forces you to do the homework assignments. As I said, though, they are highly recommended. Programming will be done in Octave, which is a kind of free version of Matlab; at least the basic functionality that we need is there. If you say, I really hate that, I want to
program in Java, for instance, feel free to do so. The corrections may be suboptimal in this case, because we designed this course to work with Octave, and the homework we give is typically easily done, or not too complicated to do, with Octave. If you do it in another programming language, you may need some external libraries, and if you want to implement the whole math framework yourself, that's challenging. So we encourage you to use Octave, but you can use anything else if you want to; it's your choice how you want to learn in this course. As I said, it is useful to have attended the Introduction to Mobile Robotics course, but this is not a hard requirement. Whenever I use something from that course, I will give you the pointer to the explicit course material and make a short repetition, for example: this is a sensor model, and sensor models are used for doing this or that; and if you need more information about it, please visit that course material. I try to make this a self-contained course, so even if you have not attended Introduction to Mobile Robotics, you will be able to understand this course and perform very well; we don't build on a lot of prior material. Any questions? Okay, perfect. The exam will be an oral exam. The number of students here is not too large, so we typically do oral exams one by one and ask you questions about the material we have covered. The most important thing for me is that people do not just stupidly memorize, but that they understand the material. Of course it's helpful to have the basic concepts at hand: whenever there's, say, the basic equation of what a Bayes filter is, that's something you need to know by heart in order to do well in the course. So for some things it's helpful if you have learned them and know them, but for me the most important thing is
that people can actually use their knowledge to solve new problems and transfer it to new situations. When I ask you what happens if you have such a system and you change this component to a completely new component you've never seen before, you should be able to make the transfer and tell me what needs to be changed in this system in order to build a working robot mapping system. So it's much more important to me that you understood the material, and therefore the homework assignments are mainly programming, because I believe that through programming things you get a quite deep understanding of what you're doing; there are fewer theoretical homework assignments. That's the style in which we are going to do this. The exam will be in the examination period, so in the next term there will typically be one or two days where all the exams take place. Yes, please? In what language is it, German or English? I can't speak anything other than German and English, or at least not at a level where I can actually grade what you understand and what you say. So you can choose whether you want to do the exam in English or in German. If you want to do it in German, let me know beforehand, because it depends on the person who is joining the exam, the second person around; I just need to make sure that this person speaks German as well, for the protocol. But that's the only constraint that we have. So who of you has actually attended Introduction to Mobile Robotics last term or any term? Two, three, four people. Okay, that's fine. It means that I will probably spend a little bit more time on the concepts that we use from Introduction to Mobile Robotics, to give you a good start. The lecture takes place Mondays, you all made it here, 10 to 12; exercises Wednesdays 2 to 4, whereas this Wednesday we will actually have a short lecture in the beginning and a short exercise, and for this
exercise it's useful if you bring your own notebook with Octave installed, so that you can do some hands-on work in the course. If you have problems, you can directly ask Rina or Fabrizio for help. So it's pretty useful if you have your own notebook: bring it and make sure you have Octave installed, so that you can work within the tutorial during the course and are also able to follow the introduction to Octave which will be given on Wednesday. Okay. There's also a question-and-answer forum which I just set up at Google Groups. You'll find it under this URL, which is also available on the course homepage, where you can post questions, and we will try to answer them; or if you see a question that someone else has posted, feel free to reply to it as well. What we don't want to have there are solutions to exercises, because everyone should be encouraged to do them themselves, so don't post solutions. But if you have any questions, feel free to ask, even if it's a dumb question. This allows us to see what people haven't fully understood, and it may be more convenient for you to ask this way instead of coming to our office. But again, you're invited: come to my office if you have any questions, or visit Fabrizio or Rina if you want to know any details or there's something you haven't understood; that's what we are here for. So please make this an interactive course: ask us questions, and feel free to do that outside the course as well. Visit us, send us email, or preferably go through the forum, because then other people will benefit from the answer as well. The only thing: if you post there, please make sure you have a very precise question. If you have something like, I haven't understood anything, could
you please repeat this and this and this concept, that's also fine, but if you have a concrete question, try to formulate it as concretely as possible. That's something we experienced over the last years: sometimes you get very foggy questions where you don't know whether the person wants to know A, B, C, or D, and it's a bit tedious to answer all four questions. So make sure your question is precise; it helps us to answer, and to answer quickly. The course is mainly based on one book, Probabilistic Robotics, which some of you may have seen. At least the first part of the course is taken from that book, and all the notation we use in the course is in line with that book. If you look for the book in the library, there should be at least 12 copies available here, so there should be enough material for this course. It's a pretty good textbook; I really recommend it. It explains concepts very well, so I like it a lot, and therefore a large fraction of the course material is based on that book. In the second part of the course, when it goes more toward graph-based approaches, we don't rely that much on the book anymore, but there will be other resources available. Everything else is available as PDF on the course website: all papers, all tutorials on certain aspects are available there and you can download them. The only thing we couldn't put online is a PDF of the book, so you can either buy it or find it in the library. One more thing: if you find something in the book which you think is really, really weird, there is the website probabilistic-robotics.org, and there's an errata list for the book there. Depending on the edition that you have, there may be some errors in it; even the third edition, which is the current one, still has some errors, and you find a detailed error list there. So whenever you find something where you say, I really don't understand this, it must be wrong: before you spend too much time, look up the website, look up the errata page, and that may simplify your life a lot. Again, if you have any feedback about this course, just talk to me, talk to Rina, talk to Fabrizio, send us email, whatever you prefer. The more interactive this course gets and the more feedback we get from your side, the easier we can change things. Last year, when I taught it the first time, it was a quite interactive course, and I like to have that kind of interactive style: you asking questions, something you haven't understood, interrupt me, and I will try to answer. Or if you want anything changed, you say, whatever, you speak too fast, or your style of teaching sucks: let me know what you would like to have changed, and I'll try to take it into account. I obviously won't do it for every comment, but most comments have some truth in them, and I try to take that into account and improve what I'm doing here. So whenever there are things that you dislike, let me know. If you don't tell me, nothing will change, because I won't know it; if you let me know what you dislike, I will try to change it. Are there any questions from your side?
|
SLAM_Course_2013
|
SLAM_Course_09_Short_Kalman_Filter_WrapUp_201314_Cyrill_Stachniss.txt
|
Okay, then we are going to continue with a short wrap-up of the first block of the lecture. The first block was the Kalman filter and all its friends, and the next step, which is something we will start now, is particle filters: how to use particle filters for solving the simultaneous localization and mapping problem; and then we will look into graph-based methods. So, Kalman filters: we started with the Kalman filter, and not just the Kalman filter itself but the Kalman filter and all its friends. We looked into the Kalman filter; we haven't really used it for SLAM because it has the issue that it assumes that everything is linear, so it's not really that well usable. That led us to the extended Kalman filter, which uses linearized models and is one way to deal with the nonlinearities. Then we looked into the unscented Kalman filter, which uses the unscented transform to get a better approximation of the linearized functions, and then we looked into the extended information filter briefly, and into the sparse extended information filter to a much deeper extent. These are the most important representatives of the techniques from the Kalman filter family that have been used for SLAM algorithms. The Kalman filter, just as a very short wrap-up, consists, as all the others do, of two steps: the prediction step and the correction step. We had the very simple setup where we only have a linear dependency to go from the previous mean to the predicted mean, and then update our predicted covariance matrix accordingly. Then we computed the Kalman gain, which was a trade-off between how certain I am about the predicted estimate of the robot's pose and how certain I am about the measurement; it's a weighted mean between the observation and the prediction, and this then leads to what is also called the correction step,
because it uses the observation to correct the prediction. That's the standard Kalman filter, and it's an algorithm you should know; it's one of the basic algorithms, and you should have really understood it. If not, it would be a pity, and then I may have failed, because we spent quite some time on it, so I hope you are all aware of what the Kalman filter does. The problem we have with the Kalman filter is that it assumes linear models, so whenever the world becomes nonlinear, we are going to fail. Then we said, okay, let's replace those linear functions by nonlinear functions. The problem is that in order to still execute the Kalman filter framework, we need linear functions, otherwise it simply won't work. So the trick is to linearize those functions at the current linearization point, the last best estimate that we had, and approximate with the Taylor expansion. This then led us to the extended Kalman filter, which is very similar to the Kalman filter except that it uses the nonlinear function g and the nonlinear function h to compute the motion update and the predicted observation in the measurement function, replacing the linear models of the Kalman filter. So the extended Kalman filter is an extension of the standard Kalman filter, and you could call it a trick that is applied in order to deal with those nonlinear functions: just linearizing them at the current best guess. This is an approach which was very frequently used; for me, the most frequently used SLAM implementation, although not that many new ones are coming up, since today most people use other techniques. But it's traditionally the SLAM approach with which everything started, and it works well as long as your nonlinearities are not too bad and as long as you don't have a huge uncertainty combined with nonlinearities; that's something which may
screw up your system, or your filter, if you want to use it for SLAM. This is our typical state vector: we have the robot's pose and the positions of the landmarks, and this gives us the corresponding covariance matrix. If we start an estimate and the robot sees the first landmark, this gives me the corresponding correlation matrix: the robot's pose here is correlated with the position of the landmark, and you can see those small dots over here. As we continue to drive through the environment and keep mapping, we get this dense covariance matrix, or correlation matrix, the normalized variant of the other one. You can see this checkerboard pattern, which tells us that all the x positions of the landmarks are correlated and all the y positions are correlated. It has actually been shown that in the limit all the landmark estimates become fully correlated; if you continue driving around, we have this full correlation, so the matrix becomes dense. The complexity of the EKF SLAM algorithm is dominated by the number of landmarks we have: it's quadratic in the number of landmarks, with quadratic memory consumption. This only holds for the SLAM case, because in SLAM we have the effect that I see only a limited number of features, and I update only a small fraction of the state; otherwise the operations are more costly. So I can do it in quadratic time only because of the properties of SLAM, but still, even with quadratic complexity, the EKF can become intractable if we look at a large-scale mapping problem: the environment is simply big, we have a lot of features, and this doesn't really scale well. Then we looked into the unscented Kalman filter, which was one technique for improving the linearization of the EKF: not just taking the current best guess as the
linearization point for the linear approximation, but using multiple points. This was the unscented transform: take these sigma points, more than just the mean, propagate them through the nonlinear function, reconstruct the Gaussian distribution based on that, and use this for the motion update as well as for the correction step. That's something you should have on your homework assignments, to work on exactly that: we use the unscented transform, which then led to the unscented Kalman filter. Again, just as a small summary: this is what the EKF does; we have one linearization point, it linearizes the function and propagates the motion update through this linearized function. What the unscented transform does is compute these so-called sigma points, propagate all sigma points, and then reconstruct a Gaussian based on them. If you compare the UKF with the EKF: they give us the same result for linear models, so if everything is linear, I don't lose anything by doing that; and it's a better approximation than the EKF for nonlinear models, so I gain something, although it's reported in a lot of applications that the improvement is somewhat small, so it doesn't make a dramatic difference. You can use this to tune your algorithm, but the effect is not dramatic. Another advantage is that you don't need to compute Jacobians for the UKF, so if you're too lazy to compute Jacobians, you may use the UKF, although there may be other issues, such as how to compute the matrix square root, which can lead to numerical problems. It's somewhat slower, slightly slower, than the EKF, but still in the same complexity class; the sampling process is simply a constant overhead, so it's just a little more expensive to compute. Then we had the extended information filter, which is performing or running an extended Kalman
filter not in the moments representation, using a mean and a covariance matrix, but in the canonical form, where we use the information matrix and the information vector. The thing is that what was cheap in the extended Kalman filter becomes expensive in the extended information filter, and the other way around. Something which is still a little suboptimal is that I typically need to recover the mean for doing the motion update and the measurement update. The EKF and the EIF have in the end the same expressiveness; of the two steps, one is more expensive and the other is cheaper, and depending on what exactly I do, I may choose one or the other, but much more often the EKF is used, and both have the same expressiveness in their operations. Same complexity class: one step is more efficient in the one case, the other in the other case. That's basically it, but the information form served as one motivation to look into the sparse extended information filter, which we just discussed today, because if we look into the normalized information matrix, we see that it is approximately sparse. The key trick is to get rid of a large number of those direct links, to obtain a sparse approximation, and then solve this approximation. We have the four steps: the motion update, the measurement update, the recovery of the mean, and the sparsification, which we just discussed a few minutes ago. I'm not going to go into all the details here, but the most important thing is that we only maintain links between the robot and a small number of landmarks, the active landmarks; we only keep a constant number of direct links between the robot's pose and the active landmarks. When the robot is moving, those landmarks obtain links of increased strength, but since the number
of active features is limited, the number of links that are generated in this way is also limited. In this case I maintain a sparse matrix, and then we can exploit this sparsity in all our steps. So SEIF SLAM is an approximation, as I said before: the quality of the solution is worse, but I can compute it much faster and need less memory to maintain it, and these are the big advantages of the sparse extended information filter for SLAM compared to EKF SLAM, for example. Okay, so the Kalman filter and its whole family are different ways of representing and updating Gaussian distributions based on the motion updates and the measurements that I have, and depending on what kind of models I have — whether they are linear or nonlinear — I can choose one or the other, or depending on what properties exactly the algorithm requires from the problem that I'm trying to solve. But all the filters that have been presented so far in this course assume a Gaussian distribution. So whatever it is will be represented by a Gaussian — whether it really is a Gaussian or not, the filter doesn't care, it's just a Gaussian, that's it. That is a limitation, and it is especially a limitation if you, let's say, want to express: I'm either here or there, I'm pretty sure I can be nowhere else, but here and there are simply two possible locations of the robot which I can't distinguish because of the environment. Let's say we have a symmetric environment and both places look exactly the same. So I say I'm in room A or room B, the rooms look identical, I'm pretty sure I'm in either A or B, and I'm pretty sure I'm nowhere else but those two. There's no good way to represent that with a Gaussian distribution, and that's one of the disadvantages of these systems here, especially if there's high uncertainty, or if there are ambiguities in data association, which can then lead to these multimodal or bimodal distributions. And you can actually show — it doesn't happen that often, but in most real-world datasets that
you see, which are recorded with a real robot, you have a certain number of situations, I'd say up to five percent, where the distribution is non-Gaussian. Still, in most cases the Gaussian approximation is not too bad, but in a few cases it's dramatically wrong, and then those filters will typically diverge and will not work on those datasets. What we are going into next week is particle filtering, which is an alternative technique to address state estimation with a recursive Bayes filter. I don't want to go into the details today, but I just want to give you a very, very short outlook on what you will experience next time. We are moving away from having a parameterized distribution model like our Gaussian, where we say we have a mean and a covariance matrix to represent it, and we will look into what is called a nonparametric way of representing distributions. The way the particle filter does it is with random samples. The best way to view a random sample is as one possible state the system might be in — it's just one hypothesis. Say I need to cover a certain uncertainty — let's say the robot is either here, or here, or there, or there — you could say: okay, I have a lot of samples somewhere here, a lot of samples somewhere here, a lot of samples somewhere there, and somewhere there, and those samples represent possible states the system might be in. And that is one way to represent a probability distribution without having to write it down in a parametric form — you just have samples. Of course, the larger the uncertainty I have to cover, the larger the number of samples that I need; there's no free lunch. But if I just want to maintain, let's say, not-too-large areas of uncertainty, or I have situations where the dimensionality of my problem is not too big, it's a pretty attractive way of performing state estimation outside the Gaussian world. And what we are going to do next week is to have a very, very brief
summary of particle filters — how do they work, what are the key assumptions, what is the key algorithm — and see, just as an example, how this works in localization. So we know what the environment looks like, and then we will do something which is called Monte Carlo localization. In maybe half an hour, I just want to give a very quick tour through Monte Carlo localization, which is kind of a simplified setting where you can nicely see the properties of the particle filter algorithm. And what we're then going to do is: okay, given that we know how localization works with a particle filter, how can we actually do SLAM with the particle filter? And there are a number of challenges that we need to address. The key challenge is the high dimensionality of our estimation problem. The robot's pose is low-dimensional — three-dimensional, that's kind of easy, or six-dimensional — but if I have a million landmarks, that easily gives me a more-than-two-million-dimensional state space, and that's something particle filters cannot handle well, because, as you will see, you need to have enough samples to cover the meaningful areas or regions in your state space, and in these situations, with hundreds of thousands of dimensions, they typically fail, because you can't represent so many samples. But there will be tricks on how we're going to address and solve that issue, so that we can use the particle filter and exploit the nice property that we can represent multimodal distributions with it. Again, the key idea of the particle filter is: I have a large number of possible hypotheses, so-called samples, of states the system might be in, and if you want to compute, let's say, the probability that the robot is in a certain area, you inspect this area and simply count the number of samples in that area — and the more samples there are in this area, the higher the likelihood, the probability, of that area. That's kind of the basics of particle filtering. And the nice thing is:
whenever I do a motion update, I just need to update all the individual particles. So you can have crazy nonlinear functions, because you only need to propagate one single state, and that's great. You can propagate all samples forward — if there's some weird zigzag motion or whatever, every particle can be propagated through this nonlinear function very, very easily, and you get your update. You still need to see how to account for the increase in uncertainty that results from this motion, but it's a very, very attractive way to deal with nonlinear motion models. And then you need to do a correction step, very similar to the filters before, where we take into account the observation and say: okay, how well does this particle, this state hypothesis, actually represent the world? How well is it in line with what the world looks like? This then assigns something like an importance weight — or a fitness value, if you want to be a little bit more imprecise — to every particle. And then in the end you do a kind of survival of the fittest, where you eliminate the ones which perform badly and replicate those which performed well. This happens in a mathematically sound way — so not just "ah, we replicate this one and this one," there's a sound way of doing that — but it can be viewed or understood in this way; it may be easier to grasp if you say: you have those samples, you propagate those samples, you weigh them according to how well they performed, and you kill some of the bad ones and replicate some of the good ones. And in this way you can recursively update the belief, so that you hope that all samples in areas whose probability tends toward zero will die out, and all those which are close to the true pose will actually survive. That's a very, very informal description of the particle filter, but that's something we are going to experience next week in more detail. And we're done with the first block of the lecture. Thank you for your attention, and see you
next week. Thanks.
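The propagate / weight / resample cycle previewed above can be sketched in a few lines. This is a minimal, hypothetical 1-D illustration — the world, the noise values, and the "room A or room B" bimodal prior are all made up for the example; it is not the lecture's reference implementation:

```python
# Minimal 1-D particle filter sketch: propagate, weight, resample.
# All numbers (motion noise, sensor noise, the toy world) are invented
# purely for illustration.
import math
import random

random.seed(0)

def motion_update(particles, u, noise=0.1):
    # Propagate every particle through the (possibly nonlinear) motion model.
    return [p + u + random.gauss(0.0, noise) for p in particles]

def importance_weights(particles, z, sensor_noise=0.2):
    # Weight each hypothesis by how well it explains the observation z.
    w = [math.exp(-0.5 * ((p - z) / sensor_noise) ** 2) for p in particles]
    s = sum(w)
    return [wi / s for wi in w]

def resample(particles, weights):
    # "Survival of the fittest": draw particles proportionally to their weight.
    return random.choices(particles, weights=weights, k=len(particles))

# Bimodal initial belief: robot is in "room A" (near 0) or "room B" (near 5),
# exactly the situation a single Gaussian cannot represent.
particles = [random.gauss(0.0, 0.3) for _ in range(500)] + \
            [random.gauss(5.0, 0.3) for _ in range(500)]
particles = motion_update(particles, u=1.0)       # robot drives 1 m forward
weights = importance_weights(particles, z=6.0)    # sensor: "I am near 6"
particles = resample(particles, weights)          # room-A hypotheses die out

mean = sum(particles) / len(particles)            # belief collapses near 6
```

Note how the bimodal belief survives the motion update untouched and only collapses once an observation disambiguates the two rooms — the behavior the Gaussian filters above cannot express.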
|
SLAM_Course_2013
|
SLAM_Course_05_EKF_SLAM_201314_Cyrill_Stachniss.txt
|
Welcome to the course. Today is on simultaneous localization and mapping using an extended Kalman filter. Last week we introduced the Kalman filter and the extended Kalman filter — the key concepts of these algorithms — and today I would like to go more into the detail of how to apply this concept to the simultaneous localization and mapping problem, so that we are able to estimate the location of the robot as well as a map of the environment, in this case a map of landmarks, within this framework. Just to put that into context: we want to estimate the position of the robot and the positions of landmarks in the environment — this is our map — and we need that map in order to perform navigation tasks. One of the challenges in this problem is that the process of estimating a map cannot be decoupled from the process of estimating the trajectory, or the pose, of the robot, because both variables depend on each other, and so we have to solve both problems at the same time: estimating the locations of the landmarks and the position of the robot. So we started with the definition of the SLAM problem — what do we have, what do we want? We have a sequence of control commands that the robot executed; these are the variables u from time step 1 to time step T, and these are either raw commands, like "drive with a certain velocity," or odometry readings, and we treat these odometry readings as control commands — they are kind of counting the revolutions of the wheels and estimating where the vehicle went. The second thing we have are sensor observations. This is typically a sensor which observes the environment, such as a laser range finder, a camera, or a sonar sensor, and these measurements typically provide me information about the environment. That could be the bearing information to a landmark, or the range information to a landmark — how far is a landmark away —
or it could also be both at the same time. These are the typical sensor observations that we get in this context. What we want to estimate from this data is, first, the map of the environment, typically referred to as m. In our case here today, this map is a set of landmark locations, and these landmarks are just things that the robot can identify with its sensor setup — be it a range scanner or camera images — and we want to estimate the positions of those landmarks. If we are navigating outside here in front of our building, that may be trees that we see, but it can be any other type of object which populates the environment and that we want to use for localizing the robot and for estimating a map. The next thing you want to have is the path of the robot: the positions of the robot, x and y, as well as the orientation for the 2D case — where was the robot at every point in time. In most cases we actually look into online SLAM; that means we are only interested in estimating the current position, or current pose, of the robot, and we kind of ignore the previous locations, because, depending on the application, they are not necessarily interesting for us. That really depends on the estimation technique and on the task at hand. In this course we will look into three different paradigms for how to address the SLAM problem: the Kalman filter family, particle filters, and graph-based optimization approaches. What we do here today is start with the Kalman filter. As I said, in the last lecture we introduced the Kalman filter as a kind of abstract concept for state estimation, and today we want to look into the question: how can we map that to the SLAM problem — what needs to be taken into account here? Just as a reminder: the Kalman filter is a recursive Bayes filter, and the recursive Bayes filter can be split up into two steps. The first one is a prediction step, and the second one is a correction step. What the
prediction step does is: it takes the current estimate about the state of the robot, it takes into account the commands that have been sent to the robot, and it estimates a new state — a predicted state — in which the system is. If we have the landmarks in the environment and the pose of the robot as our state space, and we want to estimate how the state changes given that we execute a command, it should be clear that, at least for motion commands sent to a robot, this only changes the position of the robot — just by going a meter forward, we typically don't affect the positions of the landmarks. So in a lot of state estimation settings, the prediction step only affects a subset of the variables. Okay, so this is the prediction step: this is our previous belief, and we estimate how the belief changes if we are in the state x_t — or have the distribution which was the belief at the previous point in time — and we execute the command u_t: where do we end up? This is our predicted belief, always with this bar on top. The second step is then the correction step, which takes into account the sensor observation that the robot perceived about the environment, and it relates what the robot sees to what the robot is supposed to see. Based on the mismatch of these two observations — the predicted observation and the obtained observation — a correction is executed, and an update of the state happens. This is the correction step. And this prediction and correction step was also something we have seen in the extended Kalman filter: we explicitly had our prediction step, where we take into account the commands or controls that were issued to the robot, and then we have the correction step, which takes into account the sensor information. What we are considering here today is the EKF for online SLAM. That means we are not interested in estimating the full trajectory of the robot; we are only interested in estimating the current pose of
the robot, together with all landmark locations. So if you look at the graphical model that we introduced in, I think, the first or second lecture, we are not estimating the whole sequence of robot positions; we are only interested in estimating the current one, because we do want to know where the robot currently is in the environment and what the environment looks like — these are typically the two quantities based on which we can make most of the navigation decisions. For some applications it may be interesting to also know about the past trajectory the robot has taken, but in what we are considering today, this is not the case. So, is everything clear so far? Are there any questions up to this point? Okay, perfect. What we then introduced last time was the extended Kalman filter algorithm, and we will very, very briefly, in two minutes, revisit this algorithm to keep everything up to date. What happens in the first two lines — lines two and three — is the prediction step. There we compute the predicted belief about the state we want to estimate, and this is this mu bar and sigma bar, these two variables: the mean and the covariance matrix of our state. What happens here is: we have our previous estimate about the state, this mu t minus one — the output of the Kalman filter in the previous step — and we apply the control command u_t, and this gives us our new state. Here, this function g is a nonlinear function — we moved from the Kalman filter to the extended Kalman filter by allowing for nonlinear models — and we have our update of the covariance matrix, which takes into account the Jacobian of this function, the previous uncertainty estimate — the previous covariance matrix — plus the noise that is added to the process through the motion command u_t. So that means if we have a certain covariance matrix which represents our uncertainty and we execute a motion
command — let's say "move a meter forward," so the robot moves a meter forward — this motion adds additional uncertainty, so the covariance grows, the uncertainty grows, the covariance matrix gets larger. That's exactly what happens here on this slide; this is the prediction step. If we want to implement the prediction step for a specific application, we first need to define our nonlinear function g, which tells us how to move from a given state and a control command to our new state, and we need to compute its Jacobian — the matrix of partial derivatives. That's what we need for the prediction step. Then we have the correction step, which is down here. The first thing we do is compute the so-called Kalman gain K, and the key idea of the Kalman gain was to compute a weighted sum by asking: how certain is the robot about its predicted belief — this is this term over here — and how certain is the robot about its sensor properties — so, what's the uncertainty associated with an observation? The better the sensor is — the smaller the uncertainty in the sensor — the more the robot will trust the sensor observation, and the more the weighted mean will drag the corrected estimate towards what the robot has seen. If, in contrast, we have a very, very bad sensor with very high noise and a quite accurate motion model, the robot may trust its predicted belief much more. This Kalman gain K computes exactly this weighting. So this term over here was the uncertainty of my sensor observation: if I'm very, very uncertain about what I'm going to see, this is very, very large; we invert this sum of two elements where one element is extremely large, so the resulting term will be extremely small. So if we have high uncertainty in our sensor observations, K will be very small — a small weighting factor. And if you then look at the next line, what happens: the new mean is the predicted mean plus the Kalman gain times
the difference between what we observe and what we expect to observe — the difference between the obtained observation and the predicted observation. If the Kalman gain is close to zero, the robot will simply stay with its predicted belief; if the Kalman gain is large, it will lead to an update of the new mean estimate which moves towards the sensor observation. Okay, and then the next step down here is just an update of the covariance matrix, taking into account again the Kalman gain, the Jacobian of the observation function, and the covariance matrix of the predicted belief. That was the Kalman filter algorithm as we introduced it last time. Now I want to take that algorithm and ask: what do we need to do with it in order to solve the SLAM problem? How does g look? How does h look? And is there anything else we actually need to take into account? To do that, we first need to consider what our state space looks like. So, just a question to you: what does the state space look like? What's the dimensionality of our mean, what goes in there — any ideas, what are we going to estimate? [Student: "Position and orientation of the robot."] Okay, so x, y and theta will definitely be part of our state space. What else? [Student: "The positions of the landmarks."] Exactly, the positions of the landmarks. So it's a potentially very large vector, where the first three dimensions describe the pose of the robot and the remaining dimensions describe the x-y locations of our landmarks. That's exactly what we do. This is only for the case where the robot lives in a 2D plane; if the robot lives in a 3D world, we will have a six-dimensional vector for representing the robot's pose, because we have three rotations, and the landmark positions are then quite likely to be three-dimensional as well. But for the 2D case, we have a three-dimensional robot pose and two-dimensional landmark locations. Okay.
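In symbols, the prediction and correction steps just recapped are the standard EKF equations (using the notation of the algorithm on the slide, with $G_t$ and $H_t$ the Jacobians of $g$ and $h$, and $R_t$, $Q_t$ the motion and measurement noise covariances):

```latex
\begin{align}
\bar{\mu}_t &= g(u_t, \mu_{t-1}) \\
\bar{\Sigma}_t &= G_t\,\Sigma_{t-1}\,G_t^{T} + R_t \\
K_t &= \bar{\Sigma}_t H_t^{T}\left(H_t \bar{\Sigma}_t H_t^{T} + Q_t\right)^{-1} \\
\mu_t &= \bar{\mu}_t + K_t\left(z_t - h(\bar{\mu}_t)\right) \\
\Sigma_t &= \left(I - K_t H_t\right)\bar{\Sigma}_t
\end{align}
```

The first two lines are the prediction step (lines two and three of the algorithm), and the last three are the correction step, with $z_t - h(\bar{\mu}_t)$ being exactly the difference between the obtained and the predicted observation discussed above.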
The first three dimensions are the robot's pose, the fourth and fifth are landmark one, and so on, until we end up with our n landmarks. If you look at our state vector and our covariance matrix, they will look something like this: we have a (3 + 2n)-dimensional Gaussian distribution, if n is the number of landmarks. So we have here the pose of the robot with the corresponding covariance block, we have the uncertainty about the landmarks and their correlations, as well as, here, the correlations between the robot's pose and the locations of the landmarks — and this is what we're trying to estimate. Sometimes in books you find it written this way; sometimes people group the landmarks and the robot's pose together, and you find this kind of representation, which is exactly the same except the landmarks are now m1 to mn and not split up into their individual dimensions. Or sometimes, if you write it even more compactly, you find it written in this way, where all these guys are vectors and these are matrices. The only thing you need to note is that sometimes the notation is a little bit sloppy: this guy is called x, but it's really not only the x dimension of the robot's pose, it is x, y, theta — so you have to be a little bit flexible with this x over here, but it should be clear from the context what I refer to. In most papers or books you actually find it written that way as well, because otherwise you have too many indices — the time index, the index for the robot, and all these things — so people tend to neglect that. So don't get confused: if you read something like this, it is typically a three-dimensional vector. Okay, now we have to take this representation and go through the individual steps of our Kalman filter. We will first do it in an abstract way, using a small example, and then we will do it more concretely, deriving the individual functions. So how does our function
g look like? How does our function h look like? We'll really look into the details, so that in the end you should be able to actually implement the extended Kalman filter for the SLAM problem, which will be the exercise for next week. Okay, so let's start. What are the key elements of the Kalman filter cycle? The first thing we do is our state prediction: we take the control and estimate the new position of the robot given the control. The next thing we need to do is compute our predicted measurement — this is evaluating our function h at the predicted mean. It tells us what we expect to observe given the best estimate we have for the robot: given the belief about where the robot is, we can compute an expected observation. We then take our real measurement, we have to make a data association — saying to which landmark what I'm currently seeing actually corresponds — and we need to compute the difference between the expected observation and the obtained observation, and this then leads to the update step, or correction step. These substeps, two to five, actually belong to the correction step; they are just split up differently, because the individual steps will lead to different effects on the filter. So the first thing: the robot was over here and we do our state prediction — let's say the robot moves here, and its uncertainty is illustrated by this red ellipse. In this case the ellipse increased, and that's the new state in which the robot currently is. As I said before, the motion command only changes the pose of the robot; it does not change the locations of the landmarks — in our case assuming that the robot doesn't modify the environment. If it bumps into a landmark and moves the landmark away, that may need to be taken into account, but in the basic setting the robot is assumed to change its
own state only; it is not assumed that the robot can actually modify the environment with its commands. So the only thing that changes in the state vector is an update of the first three dimensions, because the robot moves forward; standard motion equations are used to compute where the system is likely to end up. If you look at the covariance matrix and ask which entries are updated, this actually holds for the uncertainty of the robot itself, but also for the correlations between the robot's pose and the landmarks, because while the robot moves, the correlations between the robot's pose and the landmark positions actually change. So if we need to update this part of the state space, what's the computational complexity that this operation will require? Any ideas — how many entries need to be updated given this operation? [Student: "n squared?"] n squared — we have n landmarks, so this is an n-by-n matrix, that's absolutely right, but... [Student: "It should be n — six times the number of landmarks."] Exactly, so it's linear in the number of landmarks, because we only update these blocks over here; these other blocks stay unchanged. All these guys are 2-by-2 blocks — two blocks of four entries, that is eight entries per landmark — plus a 3-by-3 block which needs to be updated. So the prediction step is linear in the number of landmarks: whenever we execute a new motion command, I need to do a number of operations that is linear in the number of landmarks I have. That's reasonably efficient. Of course, it would be better if it were just a constant-time operation, because the dimensionality of the robot's state is constant, but due to the correlations between the robot's pose and the landmark positions, we need to update a linear number of blocks in our covariance matrix, and this leads to the linear complexity of this operation. Okay, so the next thing: the robot assumes to be here, and this is the landmark
which is already stored in its map. What it does is compute the predicted observation: it takes the current state, the best estimate it has, and says, given that the robot is here, what kind of observation should I obtain? And the prediction says: hey, you should observe this landmark over here. So we can compute a predicted measurement, which looks like this — this dashed line — together with a covariance matrix for the predicted observation. That's what the robot expects to measure, given the best of its current knowledge. Okay, so the next step is that we need to consider the real measurement that we obtained, and in this case the real measurement looks like this: it's a little bit shifted to the left-hand side, this dotted line over here — this is what the robot actually perceived in reality. What the robot now needs to do is make a data association, saying to which landmark this observation actually belongs — so far we had only one landmark, so that's trivial — and it needs to compute the difference between the predicted and the obtained observation. That's exactly this step over here: this was the predicted observation, this was the obtained observation, and this line here is the difference between the two. Based on this difference we can then update our mean vector, and it leads, via the Kalman gain, to an update of the covariance matrix. In this case the overall matrix changes, and possibly all estimates of the robot's pose and the landmarks change. So what's the complexity of this operation? Yeah — it's quadratic in the number of landmarks. So a sensor update is a really expensive operation if my state space is large. And there is even something which I haven't shown here, but which we briefly discussed last week: when computing the Kalman gain, the filter needs to invert a matrix, and this matrix has the dimensionality of my observations — the dimensionality of the
observation vector. So it's actually cubic complexity in this dimensionality. But typically the robot has only a very local field of view — it observes whatever is within, say, its 100-meter range — and therefore the complexity that results from taking into account the dimensionality of the measurement vector is small compared to the number of landmarks that, let's say, a large-scale mapping problem in reality has. If it had a sensor that could observe the whole state space — observe all landmarks at the same time, and also the correlations between those landmarks — then we would need to treat the observation of every single landmark at the same time, and this operation would be more costly due to this inversion. But if we can neglect this inversion, then — absolutely right — this is a quadratic operation; for the worst case, it's quadratic, what happens here. Okay, this was the abstract example; now let's get more concrete and realize an implementation of the extended Kalman filter for addressing the SLAM problem, with all its details. We assume we have a robot that moves in the 2D plane, so x, y and theta describe the pose of the robot, and we have point landmarks that we can observe, so we have an x-y position for every landmark. We assume a velocity-based motion model — we assume we don't have odometry; we take pure velocity commands, translational velocity and rotational velocity, which we send to the robot's motors. And we have a range-bearing sensor, something like a laser range finder: it gives us the bearing and the distance to a landmark, which we can in some way extract from our laser range scanner or from our sensor information. We also assume we have known data associations — whenever we see a landmark, we know to which landmark in our map it corresponds — and we assume we also know the number of landmarks in the environment, so n is known
beforehand. Okay, how do we initialize our SLAM problem given this information — how would you initialize your mean vector and your covariance matrix? [Student: "The position, and zero uncertainty."] That's not perfectly phrased, but yes: if we perfectly know the state, the covariance matrix should be a zero matrix, because there's no uncertainty — if we know it perfectly, it's kind of a Dirac distribution on the state of the robot. You said "if we know the position of the robot" — the nice thing about a mapping problem is that it's typically a relative problem: the robot has relative observations to landmarks, so we can actually define our own reference frame. The easiest way to do that is, whenever we switch on the robot, we say: that's 0, 0 — that's the world reference frame for the robot, and we just start from there. So the first three dimensions of the mean vector would be 0, 0, 0, and the 3-by-3 covariance matrix describing the uncertainty about the robot's state should be a zero matrix. Absolutely right. What about the landmarks? We don't know anything about the landmarks. [Student: "Initialize the positions with near-infinite uncertainty."] That's exactly right. We can choose any position — take 0, 0 as well — and then we simply add an infinite uncertainty on the diagonal. We assume they are not correlated in the beginning, because we don't have any information — so these off-diagonal blocks are zero, but there's an infinite uncertainty on the diagonal. This would be one possible initialization for the SLAM problem. Yes, please? [Student: "So we have to choose how many landmarks there are before we start the algorithm — is that a limitation?"] Yes and no. The thing is that in reality you would do it differently: you would grow your matrix on the fly and start with a small matrix. The problem is that then writing this down as an
It is also not trivial to present that here on the slides without a lot of derivations, so I left it out. But in practice you are absolutely right: you would start with just the 3x3 covariance matrix, and whenever you see a new landmark you would grow your state space. That is the way you would typically do it; it just gets a bit more tricky when writing down the formulas. The other question is whether this is really a limitation on what you can compute. It is not, because you could simply add a very, very large number of landmarks — an upper bound. In the end there will simply be landmarks that still have their initial state of infinite uncertainty, and you can say: these are landmarks I have not seen. Of course this adds computational complexity, and maybe some numerical issues — that is also true, and I will discuss it later on, because these infinite values lead to some issues. But in general, if you don't care about numerical issues and you don't care about time or space complexity, it should give you the same solution. Any further questions?

Okay, so that is what our initial estimate looks like. Now let's simply go through the individual steps of the Kalman filter algorithm and expand them in detail. We start with line number two: our new mean estimate is some function g which takes as input the old mean and the control command. We are now at the step t = 1, so the old mean at t = 0 is exactly our vector full of zeros, which goes in there together with the first odometry command, and we want to estimate the new mean. We said we use the velocity-based motion model, which we discussed in the second lecture; that is one way to describe the motion. So the new pose x
prime, y prime, theta prime — is the old pose plus the expression shown here. The assumption is that we send a velocity command — a translational velocity and a rotational velocity — to the robot, and it executes these velocities for a short amount of time until we send a new motion command. During this motion the robot actually moves on a circular arc if the rotational velocity is not zero; the equations shown here are for the case where the rotational velocity omega is unequal to zero. This whole expression is simply our nonlinear function g, but only for the first three dimensions: it takes into account the odometry command u_t — the translational and rotational velocity — and the old state (x, y, theta). Why did I put the indices x, y, theta on the function g? If we go back to the algorithm, the input should be the full state vector and the output should also be the full state vector; this function here takes into account only the first three dimensions, and the index is there to distinguish that.

So, given we have defined this function, how do we map it into the (2n + 3)-dimensional space? We have an operation which updates a 3-dimensional vector, and now we have to do this in the high-dimensional space. We can simply take a matrix which consists mainly of zeros plus a small 3x3 identity matrix, to map this update step of the state space so that it only affects the first three dimensions and leaves all other dimensions alone. This can be done quite easily: we define the matrix F_x, which has 2n + 3 columns and three rows — the identity matrix here, and all zeros there, that is, 2n columns of zeros. If we multiply this matrix transposed with the three-dimensional update, we obtain a vector where the first three dimensions are exactly those dimensions and all others are zero. And this term here is our nonlinear function g.
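The pose part of this nonlinear function g — the velocity-based motion model for omega unequal to zero, with the straight-line limit handled separately — could look roughly like this (a sketch, not the official course code):

```python
import numpy as np

def motion_model(pose, v, w, dt):
    """Velocity-based motion model g for the robot pose only.

    With nonzero rotational velocity w the robot moves on a circular
    arc of radius v/w; the landmark part of the state is untouched by
    the prediction, so only the first three dimensions are updated.
    """
    x, y, theta = pose
    if abs(w) > 1e-9:
        x_new = x - v / w * np.sin(theta) + v / w * np.sin(theta + w * dt)
        y_new = y + v / w * np.cos(theta) - v / w * np.cos(theta + w * dt)
    else:                       # straight-line motion as the limit w -> 0
        x_new = x + v * dt * np.cos(theta)
        y_new = y + v * dt * np.sin(theta)
    theta_new = theta + w * dt
    return np.array([x_new, y_new, theta_new])
```

For example, driving one second at 1 m/s with zero rotational velocity from the origin ends at (1, 0) with unchanged heading.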
So, what does this function g do? It takes the current state, updates the first three dimensions, and leaves all the other dimensions untouched — exactly what the function g should do. It is a nonlinear update of the current pose of the robot given an odometry command; only the pose of the robot is updated, the landmark locations are not. Okay — so the first line is done, perfect, completed.

Let's look at the next one, the covariance prediction. We know the previous covariance, and we know R, which is just the uncertainty of our motion model, which we assume to be given. So we need to compute the Jacobian of our small function g. The point is that this update only affects — as we said before — the pose of the robot; the function g does not update the positions of the landmarks. If we look at the state vector — x, y, theta, landmark 1, landmark 2, landmark 3, landmark 4 — and derive it with respect to the individual landmarks, we just get a one for the corresponding dimension and zeros everywhere else. This leads to an identity matrix in the lower block of the Jacobian, simply because we do not change the positions of the landmarks in the update step — they are completely untouched. The only thing that changes is the upper-left block, a 3x3 matrix: I need to derive the nonlinear function that maps the odometry information into a state update, which was exactly our odometry motion model. Looking at this 3x3 block, it is the partial derivative of the three-dimensional function with respect to x, y and theta. The part of the update that is the old pose itself is a simple linear function — the identity in x, y, theta — so if I derive that, I get this identity matrix over here.
And then I need to derive the remaining block over here. If I derive these equations with respect to x and y, you see that there is no dependency on x or y in there, so the first two columns of this part of the Jacobian will be zeros — no dependency on x and no dependency on y in the update equation. That gives these two blocks of zeros. But the heading theta does appear: deriving the first component with respect to theta gives this expression, deriving the second with respect to theta gives that expression, and deriving the third with respect to theta gives zero again. So this is the Jacobian of my function g. Written out, I have the identity — an identity matrix along the diagonal — and the only two elements that differ from the identity are these two entries. So now I have computed G_t^x, and putting it together I have an identity matrix throughout, plus two nonzero elements which linearize the dependency on the sine and cosine of the heading; those are the nonlinear parts, and therefore those two elements are nonzero — if the functions were linear, this would just be an identity matrix.

So now we know G, we know the previous uncertainty, and we know R, so we can expand the covariance prediction and write it down: G_t times the previous covariance estimate times G_t transposed — only the pose block actually needs transposing, because the identity transposed is the identity — plus R_t. We end up with an equation in which you can see perfectly which parts of the matrix get updated — exactly what I showed you at the beginning, where we had an update of the entries relating to the robot's pose.
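A sketch of this covariance prediction exploiting the sparsity — updating only the pose block and the pose-landmark correlations, as described, while the landmark-landmark block is left alone (function and variable names are my own):

```python
import numpy as np

def predict_covariance(sigma, pose, v, w, dt, R_pose):
    """Covariance prediction using the sparse structure of G.

    Only the pose block and the pose-landmark correlations change; the
    (potentially huge) landmark-landmark block stays untouched, which
    is why one updates the blocks individually instead of building the
    full (2n+3)-dimensional G.
    """
    theta = pose[2]
    # Jacobian of the pose update w.r.t. (x, y, theta): identity plus
    # two nonzero entries coming from the sin/cos of the heading
    Gx = np.eye(3)
    if abs(w) > 1e-9:
        Gx[0, 2] = v / w * (-np.cos(theta) + np.cos(theta + w * dt))
        Gx[1, 2] = v / w * (-np.sin(theta) + np.sin(theta + w * dt))
    else:
        Gx[0, 2] = -v * dt * np.sin(theta)
        Gx[1, 2] =  v * dt * np.cos(theta)
    sigma = sigma.copy()
    sigma[:3, :3] = Gx @ sigma[:3, :3] @ Gx.T + R_pose   # pose block
    sigma[:3, 3:] = Gx @ sigma[:3, 3:]                   # pose-landmark block
    sigma[3:, :3] = sigma[:3, 3:].T                      # keep symmetry
    return sigma
```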
The entries of the covariance matrix that represent the robot's pose, and the correlations between the robot's pose and the landmarks, need to be updated. The first element — the original uncertainty Sigma_xx — gets these matrix multiplications applied, so there is an update there; and the two off-diagonal blocks also get updated: G_t^x times the covariance entries relating the robot's pose and the landmarks. The landmark-landmark block — and in SLAM this is a really large block — is not touched. So if you implement these operations, you would not do it exactly this way, because that would add a high computational complexity; you would update these blocks individually in your matrix, since the large block is untouched. That is just an implementation note.

Okay — prediction step completed. We know how to do it, we are all happy. Now let's summarize the prediction step of the Kalman filter as an algorithm. First we define the matrix F_x which maps us from the low-dimensional space to the high-dimensional space: the identity in the first block, the rest all zeros. Then we can write the update of the mean: the predicted mean is the old mean plus F_x transposed times the part of the nonlinear function which updates the pose given the translational and rotational velocity — exactly what we had before. Then we compute the Jacobian G, which is the identity plus F_x transposed times the block of elements that are nonzero in the derivative of the nonlinear function, times F_x. And then we can update the covariance. The only thing I have not explained before is how to compute R_t: R_t is again the change in covariance for the whole state space, but if the robot moves, it only changes the uncertainty in the 3x3 block relating to the
pose, and does not change the uncertainty of the landmarks. So we can take this 3x3 matrix and, again with the mapping function, lift it to the high-dimensional space where only the first 3x3 block contains the matrix R and the rest is all zero.

Okay — that is the full implementation of the prediction step, which you can write down in Octave, MATLAB or whatever you want, exactly in this way: just follow the algorithm, apply all the steps, and we are done. I would like to make a short break here and ask: is there anything unclear about the prediction step? Then let me know now. Anything else I should explain again? ... Fine. Who thinks they could now sit down and implement this algorithm? Okay, at least some of you — that's good. As I said, on the exercise sheet we will give out next week, the exercise will be to implement a full EKF-based SLAM system.

Just to summarize the individual steps: first, write down your state update function g, the nonlinear function which describes how we end up in the next state given the current state and the motion command. Next, compute the first derivative of this function, the Jacobian — the partial derivatives of the individual vector-valued functions with respect to the individual variables; it is the generalization of the first derivative you all know from 1D to high-dimensional spaces. Then we just follow the update equations. This function F, which you may have found weird, is used only because most of the operations are done in a low-dimensional space and we need to map them into the high-dimensional space; it just places the 3x3 blocks and fills everything else with zeros — there is no black magic behind this function F. Yes, please? — A student remarks that in practice one would simply modify the corresponding blocks of the matrix directly. — Yes, that is true.
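For completeness, here is the prediction step written literally with the mapping matrix F_x, mirroring the algorithm on the slides rather than the efficient block form — a hypothetical sketch with my own names:

```python
import numpy as np

def ekf_predict(mu, sigma, v, w, dt, R3):
    """EKF-SLAM prediction step in the F_x formulation of the slides.

    F_x lifts the 3-dim pose update into the (2n+3)-dim state space;
    G = I + F_x^T Jx F_x, and the covariance follows the standard
    G Sigma G^T + F_x^T R F_x equation.
    """
    n = mu.shape[0]
    Fx = np.zeros((3, n))
    Fx[:3, :3] = np.eye(3)                  # low- to high-dimensional map
    theta = mu[2]
    if abs(w) > 1e-9:
        delta = np.array([-v / w * np.sin(theta) + v / w * np.sin(theta + w * dt),
                           v / w * np.cos(theta) - v / w * np.cos(theta + w * dt),
                           w * dt])
        dtheta = np.array([v / w * (-np.cos(theta) + np.cos(theta + w * dt)),
                           v / w * (-np.sin(theta) + np.sin(theta + w * dt)),
                           0.0])
    else:                                   # straight-line limit
        delta = np.array([v * dt * np.cos(theta), v * dt * np.sin(theta), 0.0])
        dtheta = np.array([-v * dt * np.sin(theta), v * dt * np.cos(theta), 0.0])
    mu_bar = mu + Fx.T @ delta
    Jx = np.zeros((3, 3))
    Jx[:, 2] = dtheta                       # the two nonzero Jacobian entries
    G = np.eye(n) + Fx.T @ Jx @ Fx
    sigma_bar = G @ sigma @ G.T + Fx.T @ R3 @ Fx
    return mu_bar, sigma_bar
```

In a real implementation one would update the matrix blocks in place instead, exactly as the student's remark above suggests.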
If you implement this in Octave, you can do it much smarter, because you can explicitly modify blocks of matrices. But if you want to use exactly the equations from the algorithm, you have to write it this way, mapping into the high-dimensional space. Why is this matrix F in here? It is used to show you the direct correspondence between the raw Kalman filter equations and what we do here — it means exactly the same thing as the update written below. In reality you would not do it this way; you would update those blocks individually by modifying a sub-block of the matrix, because that is more efficient. But in order to see the correspondence to the original Kalman filter equations, I found it quite helpful to introduce this function. So don't get scared of this function F — it is really just there to account for the different dimensions of our state space. Any further questions about the prediction step?

Okay, then let's move to the update step. The first thing we need — and this is the problem-specific part — is the observation function h and then its Jacobian H. Before we compute these, we have to clarify a few things: what do our observations look like, and how is the data association given? We assume, as I said before, known data association: for every landmark we observe, we know which landmark in our map it corresponds to. We see a feature in our observations and say: this feature corresponds to landmark number j. We express this with a correspondence variable: the i-th measurement taken at time t corresponds to landmark j. So I ask: what is the landmark that I see at time t? Say beam i of the laser scan — the answer is: that's j. That is what we assume to be given; how we obtain that
is a completely different problem, and actually a non-trivial one, because there is a large number of possible data associations, and the data association tree grows very quickly if you want to track all of them. That is the reason most people just do so-called nearest-neighbor data association: given what I see, what is the best fit in my map? If the fit is very close, I accept it; otherwise I say it is a new landmark. That is the standard strategy — say 80% of all SLAM systems just do nearest-neighbor data association. We assume here that this problem is solved, because it would add another layer of complexity to our problem; the data association is given.

So, the next steps: first, initialize the landmark if it has never been observed; then compute the expected observation — what we should measure; then compute the Jacobian of h, our measurement function; and then proceed with computing the Kalman gain and the update step of the algorithm. Let's first look at what an observation looks like. We said we have a range-bearing observation: a distance to the landmark and the bearing to the landmark. If I am standing here and my landmarks are tripods with cameras — this one over here — I would say: that camera is 3 m away from me, and I see it at an orientation of, say, minus 20 degrees. For every landmark we observe, we get this tuple of a distance and an orientation in which we see the landmark relative to the heading of the robot.

How can we compute the location of the landmark from this? That is actually quite easy: the position where we predict the landmark to be is simply the position of the robot plus the measured distance times the cosine (for x) respectively the sine (for y) of the bearing angle plus the heading of the robot.
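This mapping of a range-bearing measurement into a landmark position can be sketched directly (a NumPy sketch; the function name and the state layout (x, y, theta, l1x, l1y, ...) are my own conventions):

```python
import numpy as np

def init_landmark(mu, j, r, phi):
    """Map the first range-bearing observation (r, phi) of landmark j
    into the map: the landmark sits at the robot's position plus the
    measured range in the direction (bearing + robot heading)."""
    x, y, theta = mu[:3]
    mu = mu.copy()
    mu[3 + 2 * j] = x + r * np.cos(phi + theta)
    mu[4 + 2 * j] = y + r * np.sin(phi + theta)
    return mu
```

This is exactly the forward mapping used later to initialize a never-before-seen landmark from its first observation.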
As an example: if this is heading zero and the robot is at 90 degrees, the x-coordinate would involve the cosine of 90 degrees plus the bearing — say 3 degrees, at which I see the tripod over here — times the 3 m distance. In this way I can compute the location of the landmark given a single observation and given the current estimate of where the robot is. So given this tuple, you can all compute the predicted location of the landmark — the location assuming the measurement is correct and the robot is at the right position.

Okay, now let's look at the expected observation. Given we know the state of the world — where the landmarks are and where the robot is — we need to compute a predicted observation. What the Kalman filter then does is compare the predicted observation — what I should observe given my current estimate — with what I actually observe, to compute an update of my state. To compute the expected observation I use two variables, delta and q. Delta = (delta_x, delta_y) is the difference between the robot's position and the landmark's position in x and y: if this is my x-axis and this my y-axis, I am standing at (0, 0, 0) and I observe the landmark over there, then delta_x is the difference in my x-location (the first dimension) and delta_y the difference in y (the second dimension). If I compute delta transposed times delta, I obtain exactly the squared Euclidean distance between robot and landmark: taking the dot product of this vector with itself gives delta_x squared plus delta_y squared — the squared distance. Is that clear? As a result, the predicted observation
z-hat is simply: the square root of q for the first component — the range, the distance between the robot's pose and the landmark — and, for the orientation, the atan2 function of the difference in y and the difference in x, that is, the angle from the coordinate system where the robot currently is to the landmark. Then I have to subtract the heading of the robot: if this direction is orientation zero in my coordinate system, the landmark might lie at, say, 80 degrees, but if the robot is already looking 45 degrees in that direction, I have to subtract that and end up with 35 degrees in this case. So I obtain a distance and an orientation at which I should observe my landmark — and that is exactly my function h, evaluated at the current predicted pose of the robot. This is the measurement function which maps the state to a predicted observation; actually not that complicated.

So we have defined our function h — great. Now we have to compute the Jacobian: the partial derivative of this function with respect to the individual dimensions of our state vector. The state vector has 3 + 2n dimensions, so this will turn into a very large matrix where basically everything is zero except the elements related to the pose of the robot and the elements related to the position of that one landmark. Therefore I call this H^low: it refers only to the nonzero dimensions, the low-dimensional space, and it contains only the derivatives with respect to x, y and theta — the pose of the robot — and with respect to the location of the landmark I am currently considering, because everything else is zero. Just keep that in mind.
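The measurement function h just defined can be sketched as follows (a NumPy sketch with my own names; state layout as before):

```python
import numpy as np

def expected_observation(mu, j):
    """h: predicted range-bearing measurement of landmark j from the
    current state estimate mu = (x, y, theta, l1x, l1y, ...)."""
    theta = mu[2]
    delta = mu[3 + 2 * j : 5 + 2 * j] - mu[:2]   # landmark minus robot position
    q = delta @ delta                            # squared Euclidean distance
    return np.array([np.sqrt(q),                 # range
                     np.arctan2(delta[1], delta[0]) - theta])  # bearing
```

For a robot at the origin with heading zero and a landmark at (3, 0), this predicts range 3 and bearing 0, matching the tripod example above.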
So this H^low will be a 2x5 matrix: two rows, because the function h has two components, and I derive it with respect to five different parameters. The first row of the matrix is the first component of h derived with respect to x, y, theta and the landmark coordinates; the second row is the second component of h derived with respect to x, y, theta and so on. Let's look, just as an example, at this first entry — the others you will encounter on the exercise sheet. How do we compute it? It is actually not too complicated, because it is just an application of the chain rule. We need to derive the square root of q with respect to x, where q is the scalar product of delta with itself. As a reminder, in case it has been a long time: if you have the square root of a function — say the square root of x — and you derive it with respect to x, you can write it as d/dx of x to the power of one half, which gives one half times one over the square root of x. This is what we use now: the outer derivative is 0.5 times 1 over the square root of q. Then we look at the inner derivative: what we have is of the form (a - x) squared plus (b - y) squared, where a and b are constants. Deriving with respect to x, the second term vanishes, and the first turns into 2 times (a - x) times minus 1 — the inner derivative again. And (a - x) is just the first component of delta, so we get 2 times delta_x times minus 1. So it is just applying the chain rule twice, and then you can rewrite the result:
the 2 cancels against the one half, and multiplying through appropriately gives minus the square root of q times delta_x, divided by q. You can do this for all the individual entries of the matrix, and you end up with the Jacobian, which looks like this. We computed it for the first entry; all the others are analogous: derived with respect to y you can see the symmetry; with respect to theta there is no orientation appearing in the range, so it is zero; and with respect to the landmark coordinates you obtain the same terms with opposite sign. That is what we obtain by deriving this function with respect to x, y, theta and the position of the landmark; the second row is something you will do in the exercise.

So we now have our Jacobian, but only in the dimensions that are nonzero — just the dimensions that matter here. Now we need to map it back to the high-dimensional space, which we again do with one of those mapping matrices F, now slightly different from before. It has the identity in the 3x3 block, mapping the first three dimensions, and for the observed landmark — the j-th landmark — it also has ones, so those elements are mapped into the right columns. Everything else is zero: zeros for landmarks 1 to j - 1, then the ones for the landmark j that we observe, then zeros again for all the remaining landmarks. We then multiply our low-dimensional Jacobian by this matrix, and this maps it into the complete (3 + 2n)-dimensional space. That is all the magic for computing H. So we now have our predicted covariance — the one we already computed — and H, so we can form H Sigma-bar H transposed plus Q, where Q is just a matrix that tells me how certain my sensor is — something that depends on the specification of my sensor, nothing I am computing here.
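The resulting 2x5 Jacobian, written out (a sketch; the first row is the one derived above, the second row follows the same pattern and is the one derived in the exercise):

```python
import numpy as np

def measurement_jacobian(delta, q):
    """2x5 Jacobian of the range-bearing function h with respect to the
    five nonzero dimensions (x, y, theta, l_x, l_y); all other columns
    of the full H are zero and are added via the mapping matrix F."""
    dx, dy = delta
    sq = np.sqrt(q)
    return np.array([
        [-sq * dx, -sq * dy,  0.0,  sq * dx,  sq * dy],   # d range / d state
        [ dy,      -dx,      -q,   -dy,       dx      ],  # d bearing / d state
    ]) / q
```

Note how the landmark columns are the negatives of the corresponding pose columns — the symmetry mentioned above.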
Perfect — done. I can now compute the Kalman gain, and once I have the Kalman gain I can compute the mean update, because I know H — I just told you how to compute it — and we have our obtained observation. Everything else is just applying the basic formulas, and we're done.

So let's write the correction step in algorithmic form. We define our matrix Q, the uncertainty of the sensor: we assume an uncertainty in the range reading and an uncertainty in the bearing. Then, for all features observed at this point in time: j is the index of the feature we have observed. If we have never seen this landmark before, we have a special case — we need to initialize it, because otherwise computing the predicted observation is completely flawed. One could argue that since the uncertainty is infinite, the effect of this flawed predicted observation would basically not be taken into account. But the problem I have is the linearization, the Jacobian H: you may remember from last week that the bigger the uncertainty of the distribution you push through the linear approximation of your function, the bigger the mistake — and an infinite covariance is about as suboptimal as it gets. Therefore you typically treat the initialization differently, and you initialize the landmark with the first observation. That is exactly what happens here: if the landmark has never been seen, I initialize it based on the position of the robot and the forward equations of my observation model, mapping the observation into the state space. Then I continue just as before: we compute delta, we compute q, we compute
the predicted observation; then we build this big F matrix, compute our Jacobian H exactly the way we did, use the F matrix to map it into the high-dimensional space, and then just apply the Kalman filter update equations out of the box. That's it.

Note what happens when I observe a landmark for the first time: I said I initialize the landmark from the physical observation I obtained. As a result, the difference between the predicted observation and the observation obtained in reality is zero — because I initialized it that way — and therefore, whatever the Kalman gain is, I stick with my predicted belief. That makes sense: if I observe a landmark for the first time and have no prior information about it, this observation cannot help me improve my pose estimate. That is exactly what happens: if I initialize the landmark from the observation, the innovation is zero, there is no update of the mean estimate, and I keep my predicted belief. That is the little trick being done here.

Okay, so that's done. A few notes if you want to implement this. Here we had a for-loop — for all observed features we carry out this loop — but you can also perform the measurement update in a single step: take all the observations at the same time and compute one H matrix, which then has more nonzero elements, and do everything in one update step, without iterating over the landmarks. This has some advantages and some disadvantages. One disadvantage is that the dimensionality of the matrix you have to invert changes, which can be suboptimal.
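Putting the pieces together, the correction step for a single observation with known association j might look like this — a sketch under the assumptions above; the `seen` set replaces the infinite-variance test for "never observed", and all names are my own:

```python
import numpy as np

def ekf_correct(mu, sigma, z, j, Q2, seen):
    """EKF-SLAM correction for one range-bearing observation z = (r, phi)
    of landmark j with known data association."""
    n = mu.shape[0]
    x, y, theta = mu[:3]
    r, phi = z
    if j not in seen:                       # first sighting: initialize
        mu = mu.copy()
        mu[3 + 2 * j] = x + r * np.cos(phi + theta)
        mu[4 + 2 * j] = y + r * np.sin(phi + theta)
        seen.add(j)
    delta = mu[3 + 2 * j : 5 + 2 * j] - mu[:2]
    q = delta @ delta
    z_hat = np.array([np.sqrt(q), np.arctan2(delta[1], delta[0]) - theta])
    dx, dy = delta
    sq = np.sqrt(q)
    H_low = np.array([[-sq * dx, -sq * dy, 0, sq * dx, sq * dy],
                      [dy, -dx, -q, -dy, dx]]) / q
    F = np.zeros((5, n))                    # low- to high-dimensional map
    F[:3, :3] = np.eye(3)
    F[3, 3 + 2 * j] = 1.0
    F[4, 4 + 2 * j] = 1.0
    H = H_low @ F
    S = H @ sigma @ H.T + Q2
    K = sigma @ H.T @ np.linalg.inv(S)      # Kalman gain
    nu = z - z_hat                          # innovation
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi   # normalize the angle!
    mu = mu + K @ nu
    sigma = (np.eye(n) - K @ H) @ sigma
    return mu, sigma
```

With a perfectly known pose and a first sighting, the innovation is zero, so the mean is untouched and only the landmark's uncertainty collapses — exactly the behavior discussed above.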
The other thing that changes is that you can additionally take into account the correlations between the landmarks within the same observation, if the observation relates the landmarks in a certain way — but that is just a side note. The other important point: you should always normalize your angular components. If you are dealing with angles and your angles accumulate, be aware of the wraparound — especially when you compare the expected observation and the obtained observation. Around pi, the wraparound can make it look as if an observation is, say, 359 degrees away when in reality it is just one degree off. So make sure you always normalize your angles, otherwise you get really weird effects due to this wraparound in the angular component. And, as was raised before: if you use something like Octave or MATLAB, you typically don't need to create the F matrix — you just change the corresponding blocks of the matrix in place, which is much more efficient and also easier to implement.

So that's basically it: EKF SLAM from the basics to a running algorithm. The only thing we assumed is known data association, and that's it — this is how the simplest version of EKF-based SLAM works. Yes, please? — A little question: when I introduce a landmark into my map for the first time, do I introduce it with zero uncertainty? — No. After the update step the landmark has the uncertainty of the sensor observation Q plus the uncertainty of the robot's pose, and this comes automatically out of the equations — the update happens right here. The point is: initially the landmark has an infinite uncertainty, and if you take one observation, all you know is the information from that single observation.
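The angle normalization warned about above is one line; the 359-degrees-versus-1-degree example from the lecture shows why it matters (a sketch):

```python
import numpy as np

def normalize_angle(a):
    """Map any angle (in radians) into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi
```

Without this, an innovation of 359 degrees would yank the estimate almost a full turn; normalized, it correctly becomes minus one degree.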
So after that, the landmark will have the uncertainty the robot had when observing it, plus the uncertainty of the sensor. — A follow-up: since it is computationally problematic to use those infinite values, what value should one use? — It is a combination of two things. The first is the linearization point: if you initialize the landmark already at the best possible position, you have a good linearization point, and then only the uncertainty of the robot itself matters. The problem depends on how far you are from the linearization point: the more nonlinear the function is, and the further away you are from the linearization point, the worse the approximation. If the landmark initially sat at (0, 0), that would be bad; putting in the right position from the first observation is one way to get rid of this effect. Okay — anything else so far? Okay, perfect.

Then I would like to continue and tell you a little about the properties of EKF SLAM, and some things which may not be directly obvious. The first thing I would like to talk about is loop closing. What is loop closing? It is something pretty important in SLAM. Loop closing means that after, say, a long traversal through unknown parts of the environment, the robot revisits a known area and recognizes this: the robot comes back to a place where it has been before and says, here I see those features, and they are exactly the features I saw one hour ago. This is a data association step under very high uncertainty, and one has to be careful with ambiguities in the environment, or symmetric environments. If you drove through this floor, for example — this room over here — and observed it, you might say: ah, perfect, I recognize the room I was in an hour ago. But it may be that the robot has been in the neighboring
room, which looks exactly the same, or the room downstairs, which also looks exactly the same. So loop closing always has this issue: has the robot really revisited the previous place, or is the environment just symmetric? It is actually a non-trivial decision whether the robot closed a loop or not. In our implementation here we assume known data association, which also handles the loop closing for us — but in practice that is a non-trivial problem.

One thing to observe: typically, when you close a loop, the uncertainties collapse — they get dramatically smaller in a single update step. The reason is that when the robot drives a very, very long tour through the environment, its uncertainty grows and grows over time due to the motion uncertainty. When it revisits a place, the uncertainty at that place was typically much smaller, because it lies long ago in its history; and through the correlations between that earlier pose and the landmarks there — which were then also accurately estimated — the uncertainty reduction from the loop closing can be propagated through the other side of the loop as well.

You can see this in a small example. The robot started over here and drove down along this path; the crosses are landmarks, and the ellipses are the 2D uncertainties of those landmarks. This is just before the loop closure: the robot started here, moved down, went up, is now here, and the next step closes the loop. It then re-observes this landmark, which is more or less perfectly known, and this reduction in uncertainty will be propagated around the loop. As a result, the landmarks down here — the ones furthest from the loop-closing point — will have the highest
uncertainty but the uncertainty here is dramatically reduced so if you update that steps it would actually look like this you can see here that here the uncertainty all shrunk now we have the largest uncertainties down here because just because we are knowing those L marks position that well and the robot's pose and the landmark locations are correlated and if the robot then observes the landmarks this um leads to an reduction of uncertainty um along that Loop so whatever you call a loop this is a great chance to reduce your uncertainty and the uncertainty reduces in this way through the uh correlations between the robot's pose and the landmarks okay so um as I said before the loop closing reduces the uncertainty in the robots post as well as in the landmark locations and um the main thing one needs to take care of is once one without a wrong Loop closure that means a wrong data sociation that typically leads to um filter Divergence because the estimate will be dramatically wrong the mean estimate is likely to be wrong the uncertainty estimate is like you to be wrong because one simply kind of deformed the environment because the robot thinks it is a certain place but in reality it's somewhere else so adding a wrong Loop closure or accepting a wrong wrong Loop closure is something very critical especially in these kinds of Slam algorithms the are algorithms which we will learn around Christmas or after Christmas which are more robust to this problem of having wrong data sitations but here that's kind of that kind of kills your filter so accepting a wrong Loop closure is something that you really really want to avoid and um kind of the the fact that Loop clausing reduces uncertainty sometimes actively used in for example exploration approaches or active approaches where the system tries tries to explicitly find Loop closures to reduce the uncertainty in its belief but as we address it here at the moment we just say someone steers the robot just get the incoming data 
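The collapse of uncertainty at a loop closure comes purely from the correlations in the joint covariance. A minimal 1-D sketch (all numbers invented; landmark A plays the role of the accurately known feature at the start of the loop):

```python
import numpy as np

# Toy 1-D state: [robot pose, landmark A, landmark B].
# After a long loop everything is uncertain but strongly correlated.
mu = np.array([10.0, 2.0, 6.0])
P = np.array([[4.0, 3.0, 3.5],
              [3.0, 3.0, 2.8],
              [3.5, 2.8, 4.0]])

# Loop closure: re-observe landmark A with a precise sensor.
H = np.array([[0.0, 1.0, 0.0]])   # we measure landmark A directly
Q = np.array([[0.01]])            # small observation noise
z = np.array([1.9])

# Standard Kalman/EKF update on the joint state.
S = H @ P @ H.T + Q
K = P @ H.T @ np.linalg.inv(S)
mu = mu + K @ (z - H @ mu)
P = (np.eye(3) - K @ H) @ P

print(np.diag(P))  # all three variances shrink, not just landmark A's
```

Re-observing only landmark A also shrinks the variances of the robot and of landmark B, through the off-diagonal covariance entries — the same mechanism that propagates the reduction around the loop.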
And we make our decision — is this a loop closure, yes or no — and then update the filter. So: loop closing is great, we should try to find loop closures, it dramatically helps us to reduce uncertainty, but we need to be sure that it is the right loop closure. That is the take-home message.

The next property is that in the limit — if the robot drives over and over through the environment — all the landmarks become fully correlated. As the robot drives through the environment, it keeps adding new correlations between its pose and the landmarks. You can see it here: this is the position of the robot, these are the landmarks, and this graph shows the correlations. Some landmarks are more strongly correlated than others, but in the limit all of them become correlated. You can actually see this in an example: this is a map, the blue dots over here are landmark locations, with the uncertainty associated to each landmark; that's the robot, and this is the initial correlation matrix. These are all the unobserved landmarks; all the rest is zero. The darker the value, the higher the entry in the (normalized) covariance matrix. The robot drives around and maps the environment; a couple of landmarks have not been observed — corresponding to this white area over here — the others have, and you start seeing certain patterns in the correlation matrix. If you continue, all the landmarks get correlated, but you see this slightly weird checkerboard pattern. What does this checkerboard pattern mean — any ideas? Yes, exactly: there is a strong correlation between the x positions of the landmarks, and between the y positions of the landmarks, but not between x and y. If someone tells me this landmark here is at x = 10 m, I know the x locations of all the other landmarks quite well, but it tells me nothing about their y locations.

One important thing — known since quite a while, actually since 1997 — is that the correlations between the robot's pose and the landmarks are important and cannot be ignored. If you ignore them, you can show quite quickly that you get a too-optimistic estimate of the uncertainties of your filter, so you will be overly confident about what you know about the world. Any approach which ignores the correlations between the robot's pose and the landmark locations, or doesn't handle them properly, is very likely to fail.

The next property is that the uncertainty of a landmark location decreases monotonically over time, the more measurements you get. Its maximum value is at initialization; then you observe the landmark again and again and get more certain. Here is an example for a couple of landmarks: this is when you see them for the first time — the initialization of the landmark — and through additional observations all the uncertainties decrease; the plot simply shows the uncertainty over time. So a landmark's maximum uncertainty is in the beginning, when you initialize it, and the more often you measure it, the more certain you get about it.

The next relevant property: in the limit — if you continue driving and driving and driving — you cannot get infinitely certain. The lower bound on the uncertainty estimate that you can reach is the initial
uncertainty of the vehicle. So if the robot starts somewhere and had some initial uncertainty when it took its first observation, then the estimated uncertainty of any landmark position can never become smaller than that initial uncertainty. If the robot starts with zero uncertainty, because we fix the coordinate frame there, that's fine; but if we fix the frame here, the robot drives a few meters, accumulates some uncertainty, and then takes its first observation, one can actually show that none of the uncertainty estimates will ever go below that initial uncertainty. Just by observing something, you cannot get more certain than the system initially was — unless there is additional information, for example GPS, which gives you an estimate relative to an external frame and can be seen as an additional observation. As long as you only observe landmarks, you can't beat your initial uncertainty.

Finally, I would like to show a few very famous datasets which are used for SLAM and EKF SLAM. What you see here is a vehicle with a laser range finder installed at the front of the car, at Victoria Park near the University of Sydney. This was one of the first large-scale datasets that were recorded and used in the community. I don't want to say that every landmark-based SLAM paper shows the Victoria Park dataset, but really a substantial fraction does. This is a video from the top of the vehicle; the people drive through Victoria Park and observe only those trees. There is a detector on the laser range scan which basically fits a circle into the endpoints and says: okay, that's a landmark. And you may do data association based on the width or radius of the tree trunk. So they just drive through Victoria Park and map those trees, and the final map is the locations of the trees. You can see the trajectory here; these are observations, these are the tree estimates — a typical EKF estimate. You can actually overlay that with a satellite image: these are the estimated tree locations, this is the trajectory. Some estimates are far off — maybe something else standing there was identified as a tree, or a new tree was planted between the time the aerial image was taken and the time the data was recorded — and some are simply off; those are probably errors in the estimate. But it is one of those very, very famous SLAM datasets.

Other people have tried to build ground-truth maps. That is always a challenging task: how good is my map, how well can I evaluate it? I'm building a mapping algorithm and I want to know how accurate it is. So people tried to build ground-truth-like datasets where you know where the landmarks are located. Some people used motion-capture systems, but those are typically quite limited in the size of the environment. One nice idea came from John Leonard and Matthew Walter at MIT: tennis courts are extremely accurately measured for tournaments, so they took these things used for — I don't know the English term for it — which are also normalized and quite accurately made, put them at specific locations on the lines of several tennis courts, and then drove around with the robot. So you get those features, and you know quite accurately where they are. It's definitely not ground truth, but it's closer than most other datasets. This is an example: that's the original trajectory the robot took, and this is the estimate of the system, estimating those poles on the tennis courts. So these are two typical datasets; especially the Victoria Park dataset is one you find in a large number of papers on EKF-based SLAM.

Okay, a few words about the complexity. As said last week, in general we have a cubic complexity, but the cubic part only depends on the dimensionality of the measurement, so this is typically not the limiting factor. The limiting factor is the number of landmarks the system can handle: the prediction step can be done in linear time, but the update step is quadratic in the number of landmarks, and the memory consumption is also quadratic, because I need to store my matrices. So I typically end up with quadratic complexity in memory and in computational requirements, and this is a problem for the EKF when building really large-scale maps: if the number of landmarks grows — maps with maybe millions of features — this really becomes a limiting factor, and the approach is not applicable anymore in practice.

To summarize EKF SLAM: it was the first solution to the SLAM problem in the robotics community, starting at the beginning of the '90s, approximately. There have been convergence proofs for the linear Gaussian case, and it has also been shown that the filter is quite likely to diverge if the nonlinearities are large or substantial — and the real world is nonlinear. How bad the effect is always depends on your exact sensor setup and the motion of the vehicle, but the world is simply nonlinear, and the linearization may help but can be critical, especially if
the uncertainty is very large: the larger the uncertainty, the bigger the effects of the linearization, and the resulting distribution is actually not Gaussian anymore, while the EKF assumes it to be Gaussian. One further limitation is that the system cannot handle ambiguities well, because the Gaussian distribution has just a single mode. We cannot have a bimodal distribution saying the robot is either here or here, and pretty certainly nowhere else — that is something you cannot model with the Kalman filter, due to its limitation of handling only a single mode.

Nevertheless, the system has been successfully applied to, let's say, medium-scale environments, or environments where you can place certain landmarks which can be perfectly identified, and where you have certain guarantees on the reliability of the system. For example, a harbor in Australia — I don't know exactly which harbor it is — is completely automated by these huge straddle carriers unloading ships, and they are all running a Kalman filter for localization. It's highly engineered — there are specific landmarks placed in the environment — but these trucks and cranes are really, really huge, and they operate completely without any on-site operator, from Sydney, 400-500 kilometers away from the actual harbor. Everything is done remotely, by just monitoring; the overall system is much more than a Kalman filter, but the localization system itself runs on that Kalman filter. So it has been used for a lot of relevant applications and in industrial setups; it is a very successful tool. Especially if you can control your environment, in the sense that you can place landmarks and guarantee a certain density of landmarks and things like this, it becomes a really powerful technique.

If you look at the research community, there are not that many EKF-based systems published over the last years, because there are other techniques — which we will also learn about in this course — that seem to offer advantages for a lot of problems. But for a long period of time, one research problem for EKF-based SLAM was: how do we find good approximations so that I don't lose much accuracy but get, say, linear time complexity, or linear memory complexity, or n log n — there are different variants. One of the approaches is to not maintain one big, huge map but multiple local maps, correct only those local maps, and then attach or stitch the local maps together. So there is a large number of so-called submapping techniques which have been proposed, all with the goal of reducing the computational problems the EKF has for large-scale maps.

Whatever I presented here today you can find in chapter 10 of the Probabilistic Robotics book. The notation is more or less exactly the same; the only difference is that they introduce a landmark not as a two-dimensional but as a three-dimensional quantity, where the third dimension is a data-association index. That blows up the matrices even more, so I skipped it — if you look into the book, don't get confused that the landmarks have an additional dimension, which is the ID. It maybe simplifies the implementation a little, but it would have been harder to present here on the slides and the blackboard, so I decided to leave it out. The rest of the notation is done in the same way as in the Probabilistic Robotics book.

Okay, that's it from my side for today. Are there any questions? You should all feel comfortable with what you have seen here. By next week you will get the new homework sheets — building an EKF-based system, or parts of it. We will provide some infrastructure, but implementing the key part of the algorithm will be part of the homework assignment next week. Okay, that's it from my side, thank you very much, and we see each other next Monday. Thanks!
SLAM_Course_2013 — SLAM_Course_07_Extended_Information_Filter_201314_Cyrill_Stachniss.txt
The extended information filter is again a variant of the Kalman filter, but compared to the UKF it doesn't really address linearization as the issue — it just performs the computations in a different space. If we again revisit the Gaussian distribution: the typical Gaussian is described by its two moments, the mean and the covariance matrix. If you look at the equation for the Gaussian, which is this one, the free parameters are the mean and the covariance matrix, and describing the distribution by mean and covariance is what is called the moment form or standard form of the Gaussian. That's something you probably all know. What you may not know is that there is an alternative way to describe it, typically called the canonical form, canonical parameterization, or information form — you find both expressions. It is just an alternative way of describing a Gaussian distribution: it again uses a matrix and a vector, but it's not the mean and not the covariance matrix. Instead it uses the so-called information matrix, which is the inverse of the covariance matrix and is typically called Omega, and an information vector, called xi. So the information matrix is Omega = Sigma^-1, and the information vector is xi = Omega * mu, the information matrix times the mean. Based on this parameterization — it's just an alternative parameterization — I can represent exactly the same probability distribution: I don't have a mean and a covariance matrix that I use; I have an information matrix and an information vector, and I can use them in a very similar way, or exactly the same way. However, if you operate in information space, some of the things that are difficult in the normal form become easy, and some things that were easy before become more difficult. You may say: I don't win anything, because where one thing was easy, another becomes hard. I refer you to marginalization and conditioning — it's exactly the other way around in the other space. But it can be an advantage, depending on which operation you execute more often, or if you have certain properties that you want to exploit; then it may make sense to move to information space instead of operating in the regular space.

The important thing to note is that you can perfectly convert between information space and moment space. If you want to compute your moments, you invert the information matrix, Sigma = Omega^-1, and multiply the inverted information matrix with the information vector, mu = Omega^-1 * xi. The other way around, if you want to compute the information matrix you invert the covariance matrix, Omega = Sigma^-1, and the information vector is xi = Omega * mu. So you can convert back and forth between both representations. What is the cost of converting between them? Exactly — the important operation is inverting a matrix, which is cubic, n^3, with the most trivial inversion algorithm; with improved algorithms you get down to roughly n^2.4, but it's still in the class of cubic complexity. So converting between those representations is something very costly — we can do it, but it's not something we typically want to do often.

Okay, so the first thing I would like to do is derive how the Gaussian looks in information form, and show that it is exactly
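A tiny sketch of the two parameterizations and the conversion between them (illustrative numbers only; `np.linalg.inv` is exactly the cubic-cost operation discussed above):

```python
import numpy as np

# Moment form: mean mu and covariance Sigma (made-up values).
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Moment -> canonical: Omega = Sigma^-1, xi = Omega @ mu.
# The matrix inversion makes this conversion cubic in the state dimension.
Omega = np.linalg.inv(Sigma)
xi = Omega @ mu

# Canonical -> moment: Sigma = Omega^-1, mu = Omega^-1 @ xi.
Sigma_back = np.linalg.inv(Omega)
mu_back = Sigma_back @ xi
```

Both directions recover the same Gaussian, which is the point: the two forms are equally expressive, only the cost profile of the operations differs.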
the same distribution. Therefore we start with the definition of the Gaussian distribution over here, and we want to rewrite this expression so that we end up with the distribution in information form. The first thing we do is take the expression and expand all the multiplications involved, so we end up with exactly this expression — whatever is underlined in red in the previous line is the part which led to the change. So -1/2 x^T Sigma^-1 x becomes exactly this term over here, and so on and so forth; it's just standard multiplication, plus one transposition that you need in order to combine x^T Sigma^-1 mu with the mu on the other side, but that's easy. The next thing we do is split the function: the normalizer times the exponential of this term, times the exponential of this other guy. So this guy corresponds to this term over here, and this guy corresponds to that term over here. What is special about this first term — which quantities does it depend on? Exactly: it is independent of x, our variable, so it is basically constant. So we combine this constant with the one in front — we can rewrite that and put this line into the constant, because it's just a constant and we don't care about it — and the thing which remains is the second part.

Now let's have a closer look at the individual elements. This guy, Sigma^-1, the inverse of the covariance matrix — what was that? The information matrix. So we can replace it by Omega. And this term, the information matrix times the mean, was exactly the information vector, so we replace it by xi. If we do that, we end up with this form: x multiplied in a quadratic form — something notable — plus the transposed vector x multiplied by the information vector. So what we end up with is a dual representation: this is what the information or canonical parameterization looks like, with the constant which came from the Gaussian, and on the bottom we have the moment representation as you all know it. It expresses exactly the same distribution; there is no change in the distribution itself whatsoever.

As I said before, some things which are easy in the moment parameterization get tricky in information form, and exactly the other way around, and this becomes clear if you look at marginalization and conditioning. What you see here is the covariance (standard) form and the canonical parameterization; we looked into this already, I think in the second or third lecture. If I marginalize out a variable in the standard form, that's really easy: I just need to cut out a part of the covariance matrix and a part of my vector — trivially done. For conditioning, however, the problem is that I need to take out a part of the covariance matrix — the block on b — and invert it, and this inversion is a costly operation. So conditioning is expensive in the standard form, but marginalization is easy; and for the information form it is exactly the other way around. The reason is that the inverted block I need is already available in the information form, because I already operate in the inverted space. There the marginalization is costly, because I have to invert a block of the information matrix, but the conditioning is really easy. So you see that, depending on what you want to do, one or the other representation may be advantageous. Again: these two operations are trivial, and these two are computationally expensive — not difficult to do, but costly, because you may need to invert potentially very large matrices.

Okay, we can now use this — similar to what we did for the unscented Kalman filter — to revisit the Kalman filter algorithm and ask: how do we need to change it so that it operates in information form? Again, these are just two different parameterizations of a Gaussian; they have the same expressiveness, but one operation is cheap in one form and expensive in the other. So we take the standard Kalman filter that we introduced in chapter 4 and bring it into an information filter — moving from Kalman filtering to information filtering, doing all the computations not with the mean and covariance matrix but with the information vector and information matrix. Again, the first part I would like to do on the blackboard together with you. Let's start with line three of the algorithm, the covariance prediction: Sigma_bar_t = A_t Sigma_{t-1} A_t^T + R_t, which we now bring into information form.
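As a quick numerical aside before continuing with the filter equations, the marginalization/conditioning duality just discussed can be checked with a small Schur-complement sketch (all numbers invented):

```python
import numpy as np

# Joint Gaussian over blocks (a, b), two dimensions each (made-up values).
Sigma = np.array([[3.0, 0.8, 0.5, 0.2],
                  [0.8, 2.0, 0.3, 0.4],
                  [0.5, 0.3, 1.5, 0.6],
                  [0.2, 0.4, 0.6, 1.3]])
Omega = np.linalg.inv(Sigma)

a, b = slice(0, 2), slice(2, 4)

# Marginalizing out b in moment form: just keep the (a, a) block.
Sigma_marg = Sigma[a, a]

# The same marginal in information form needs a Schur complement --
# this is where the costly block inversion shows up:
Omega_marg = Omega[a, a] - Omega[a, b] @ np.linalg.inv(Omega[b, b]) @ Omega[b, a]

# Conditioning is the mirror image: trivial in information form
# (keep Omega[a, a]), Schur complement in moment form:
Sigma_cond = Sigma[a, a] - Sigma[a, b] @ np.linalg.inv(Sigma[b, b]) @ Sigma[b, a]
```

Inverting `Omega_marg` recovers `Sigma_marg`, and inverting `Omega[a, a]` recovers `Sigma_cond` — the same distributions, with the expensive inversion landing on opposite operations in the two forms.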
If we want to do everything in information form, how do we end up with the predicted information matrix? According to the definition, it is this whole expression to the power of minus one. It's actually easier than you might think — just copy the equation down and invert it: (A_t Sigma_{t-1} A_t^T + R_t)^-1. You can't simply push the inverse through the terms, because that would only work for a pure product of matrices, and we have a sum here. Also, Sigma doesn't exist in information form — what do you replace it with? Sigma_{t-1} is the inverted information matrix, so this turns into

Omega_bar_t = (A_t Omega_{t-1}^-1 A_t^T + R_t)^-1.

And that's exactly what we have there: we can compute the predicted information matrix after the prediction step by exactly this operation. Again, that's costly — we need to invert our information matrix to do it, and we need to invert the resulting expression as well.

Now the same thing for the information vector. We have xi_bar_t — and if you forgot the definition, let's go back: the information vector was the inverse covariance matrix times the mean, so we can already write it as the predicted information matrix times the predicted mean. How was the mean computed in the Kalman filter — what were the matrices A and B involved in? Exactly: mu_bar_t = A_t mu_{t-1} + B_t u_t. But we don't have the mean here, we only have the information form, so we have to do another step: the mean was defined as the inverse information matrix times the information vector, mu_{t-1} = Omega_{t-1}^-1 xi_{t-1}. Replacing it, we obtain

xi_bar_t = Omega_bar_t (A_t Omega_{t-1}^-1 xi_{t-1} + B_t u_t).

That's it — but again we need to invert our information matrix, again a costly operation. So we can now just write down the first two lines of the information filter algorithm. How does this compare to the Kalman filter — what was the computational complexity of the prediction step there? It was linear in the SLAM case, because the prediction step only affected a constant subset of the variables; in general it involves these matrix multiplications on the full state, and if the transition affects all the elements, that leads to quadratic cost. It was linear only for EKF SLAM, because the robot motion affects only the first three dimensions and not the others, so we just needed to compute the correlations between those three variables and all the landmarks, which gave us linear complexity. So in SLAM it is linear, in general quadratic — but here this is not quadratic, it's cubic. What we learn from the first two lines of the information filter algorithm is that the prediction step becomes more costly in information form.

Okay, let's look at the correction step. The correction step is derived in a slightly different way — and of course it's the easier way of doing it. The prediction step corresponded to the prediction step in the Bayes filter, because we are using the Bayes filter framework here, and the final belief at time t is computed by multiplying the observation model with the predicted belief.
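Before the correction step, the two prediction lines just derived can be sanity-checked against the familiar moment-form prediction (all numbers invented):

```python
import numpy as np

# One prediction step of the information filter, checked against the
# standard Kalman filter prediction (all matrices are made up).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
R = np.diag([0.01, 0.02])         # motion noise
u = np.array([0.5])

mu = np.array([0.0, 1.0])
Sigma = np.diag([0.3, 0.2])
Omega = np.linalg.inv(Sigma)
xi = Omega @ mu

# Information-filter prediction -- note the two inversions,
# which make this the expensive step in information form:
Omega_bar = np.linalg.inv(A @ np.linalg.inv(Omega) @ A.T + R)
xi_bar = Omega_bar @ (A @ np.linalg.inv(Omega) @ xi + B @ u)

# Moment-form (Kalman filter) prediction for comparison:
mu_bar = A @ mu + B @ u
Sigma_bar = A @ Sigma @ A.T + R
```

Converting `Omega_bar` and `xi_bar` back to moment form reproduces `Sigma_bar` and `mu_bar`, confirming that the two prediction steps describe the same predicted belief.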
predicted belief we computed already on the loop on the previous slide so we can actually use this directly and express it so we have a normalization constant times this was the the Gaussian of the observation model if you remember correctly so there's C times X was exactly the mapping from the current state to the information space so this was the predicted observation the difference between the obtained observation the predicted observation so this this guy here corresponds to this line over here and that is our pretty our predicted belief okay so what we can do is we first is kind of we combine those two terms that is our equation the next thing is we can do is we can just do the multiplication getting rid of all the brackets familiar before so we get this very very long expression and now we can simply sort them recording two elements which involve computing so again constant terms as we did it before so the constant terms kind of disappear now so those which only like the term the mean cover information matrix mean this term is again hidden therefore we have a new constant here to today to prime twice primed and then we can actually group those elements those which depend on only on X and those which are quadratic form in X so every if X times matrix times X so we can actually group them together and obtain a form which is minus 1/2 X transposed times a matrix times X and plus X times the vector and if we now look to the definition of the information form we could say ah that's easy so this must be the information matrix this block over here and this must be the information vector so it's kind of the very very very simple way of deriving that we said okay we start with what we know from the gaussian world we just rearranged individual terms group those terms so we say there's something where X multiplied with the matrix times X and some other term weights X times vector given the analogy we know from the information form how the information forms build up we 
can directly see okay this must then be the information matrix and this must be the information vector so from this line here we can we can then derive that the information matrix is computed as C transpose so this was the mapping which maps C was the mapping which maps from the states to the observations times the inverse of the uncertainty of our sensor times C plus our predicted belief and here again the term only that we have here the information vector and here the our observation so what you see in here the only matrix that we have to invert in this term in this equation over here is actually this q and q was as just the dimensionality of our observations so if we think about large-scale estimation problems like slam that's a constant because this is a constant size so the only thing we need to do in here is we have some matrix multiplications we need to carry out so that's a step which we can do very very efficiently so we can though put those two lines in the down here there's the prediction step and the correction step there's no explicit computation of the common gain in here but again we have this weighing of the what we knew be what we are predicted belief combined with our with the uncertainty of the observation so we come up with this very compact formula we can now see in here there's no large matrix that I need to invert down here that means the computational cost of the correction step is no lot lower compared to the common filter so if we have we have the prediction step in the correction step and the differences can be discussed this basically before in the slides is that the Kalman filter was cheap in the prediction and expensive in the correction step and now for the information filter exactly the other way around it's expensive in the prediction step but sufficient in the correction step and depending on how often I need to do one step or what properties I have of those matrices one can exploit the one of the other formulation to be more 
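To make this duality concrete, here is a minimal numerical sketch of one predict-correct cycle in both forms. This is my own illustration, not code from the course; the names `kf_step` and `if_step` are made up for this example. Recovering μ = Ω^{-1} ξ from the information filter's result should reproduce the Kalman filter's mean and covariance exactly.

```python
import numpy as np

def kf_step(mu, Sigma, u, z, A, B, R, C, Q):
    # Standard linear Kalman filter: cheap prediction, costly correction.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

def if_step(xi, Omega, u, z, A, B, R, C, Q):
    # Information filter: the prediction inverts Omega (costly); the
    # correction only inverts Q, whose size is the observation dimension.
    Omega_prev_inv = np.linalg.inv(Omega)
    Omega_bar = np.linalg.inv(A @ Omega_prev_inv @ A.T + R)
    xi_bar = Omega_bar @ (A @ Omega_prev_inv @ xi + B @ u)
    Omega_new = C.T @ np.linalg.inv(Q) @ C + Omega_bar
    xi_new = C.T @ np.linalg.inv(Q) @ z + xi_bar
    return xi_new, Omega_new
```

Running both on the same prior and then converting the information-form result back to moments gives identical estimates, which is the equivalence claimed above.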
The other point, as was said before, and said correctly: inverting a matrix is expensive, but only if that matrix is dense. If the matrix is very sparse, meaning the majority of its elements are zero, there are more efficient techniques for inverting it, roughly linear in the number of non-zero elements. Something we will see next week is that, depending on which form I operate in, some of these matrices are sparse or nearly sparse (I can make them sparse without changing the matrix too much), and this is something I can additionally exploit, so that even inverting a matrix can be comparably cheap. For the SLAM problem we will see how the information matrix and the covariance matrix behave: one can show that the covariance matrix becomes dense, while the information matrix is approximately sparse, or can be made sparse, since the very tiny elements can be approximated by zero. Exploiting this leads to the sparse extended information filter, which we will discuss next week (or maybe we need two weeks for that, because it is a bit more involved). That is one way of exploiting the information space: I have a certain application, certain operations are costly or cheap for that application, and I choose the form, in this case the information form, that better supports it. Here the complexities are written down again in a more formal way; the star means that a step can potentially be done faster depending on the application. For the application we are looking at in this course, EKF-SLAM, the step is linear and not quadratic in the number of elements, but again only because the update affects a small subset of the state.

Okay, the extended information filter is clearly the next step. As with the Kalman filter, we have the problem of nonlinear functions, so we need to go a step further and do the extended information filter. The extended information filter is the dual representation of the extended Kalman filter. What changed in going from the Kalman filter to the extended Kalman filter was that we allow a nonlinear function g for the prediction step and a nonlinear function h for the correction step, and we linearize them; this is exactly what the extended information filter does as well. We approximate g by evaluating it at the linearization point, the previous mean, plus the Jacobian G_t times the deviation from that point, and in the same way we approximate the observation function h by evaluating it at its linearization point, the predicted belief at time t, plus the Jacobian H_t times the deviation. This way we have linear functions again. So we just look at which steps changed going from the Kalman filter to the extended Kalman filter, and make the corresponding changes in the information filter. We have seen that the Jacobian G_t replaced the matrix A_t, so in the information filter equations we likewise just replace A_t by G_t; that was easy. It gets a little more involved for the nonlinear function itself: the problem is that the nonlinear function requires the linearization point, the previous mean, and it cannot take an information vector as input, unless we could somehow cleverly rewrite the function, which in general is impossible. So we need the previous mean as input in order to carry out the prediction step with our nonlinear function.
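The EIF prediction step with this mean recovery can be sketched as follows; this is a minimal illustration under my own naming (`eif_predict`), where the caller supplies the nonlinear motion model g, its Jacobian G evaluated at the previous mean, and the motion noise covariance R.

```python
import numpy as np

def eif_predict(xi, Omega, u, g, G, R):
    # The nonlinear model g needs the mean as input, so we must first
    # recover it from the information form (a costly inversion).
    Sigma_prev = np.linalg.inv(Omega)
    mu_prev = Sigma_prev @ xi            # recover the previous mean
    mu_bar = g(mu_prev, u)               # propagate through g
    Omega_bar = np.linalg.inv(G @ Sigma_prev @ G.T + R)
    xi_bar = Omega_bar @ mu_bar          # back to information space
    return xi_bar, Omega_bar, mu_bar
```

For a linear g this reduces to the ordinary information filter prediction, which gives a quick sanity check.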
As a result, if we do this in information space, we first need to recover the mean by inverting the information matrix, μ_{t-1} = Ω_{t-1}^{-1} ξ_{t-1}, then propagate this mean estimate through the nonlinear function g, and then multiply by the predicted information matrix to convert back to information space:

ξ̄_t = Ω̄_t g(u_t, μ_{t-1})

So going from the extended Kalman filter to the extended information filter, we first have to recover the mean, then map it through g to obtain the predicted mean, and then convert back to information space. And that is already a problem, because we again need to invert our information matrix; some of the gain we had before is now lost in the extended information filter. Of course, if we had a way to propagate the information vector more efficiently we could use it, but in the standard transition from the extended Kalman filter to the extended information filter we have the problem that the inverse of the information matrix must be available here, and if this inversion is expensive, that is critical. As I said before, and as we will see next week, there are ways, or certain situations, where we can approximate it by a matrix that we can invert faster; but if that is not the case, this step also gets very expensive.

The correction step of the extended information filter we can derive in exactly the same way as for the information filter: the final belief is the observation model times the predicted belief, we multiply out, group the terms, throw away (or move outside) everything that is constant, and end up with an information matrix and an information vector. The only difference is that the term C_t x_t is now replaced by the linearized observation function h(μ̄_t) + H_t (x_t - μ̄_t), which yields:

Ω_t = Ω̄_t + H_t^T Q_t^{-1} H_t
ξ_t = ξ̄_t + H_t^T Q_t^{-1} ( z_t - h(μ̄_t) + H_t μ̄_t )

And again we have a problem here: in order to make this transformation through the nonlinear function, we need the predicted mean μ̄_t; the mean we computed in the prediction step is still needed. So the extended information filter has the property that both steps require the mean estimate, and we have to make sure we have that mean estimate available. This, then, is the extended information filter, a dual representation of the extended Kalman filter that operates in information space. The only unpleasant part is that we have to move back to the moment form in order to use our nonlinear functions, because those functions are defined in the space of moments, not in information space; that is one reason why this does not look as clean as the basic information filter did, since we have to switch back and forth between the two forms.

If we compare the extended information filter with the extended Kalman filter: the complexities of the prediction and correction step can differ; they differ if I can recover the mean efficiently, otherwise I run into the problem that both steps become expensive. It has, however, the same expressiveness: if you can carry out these operations and you don't care about timing, the extended information filter gives you the same result as the extended Kalman filter. It is reported to be numerically more stable in some situations, although other people doubt that; some say it is better, but the difference is really, really small, and therefore most people stick with the EKF form, because they simply don't have to convert back and forth between the two spaces.
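The EIF correction step derived above can be sketched in the same spirit; `eif_correct` is a hypothetical name, and h, H, Q are the measurement model, its Jacobian at the predicted mean, and the sensor noise. Note that only the observation-sized Q is inverted here.

```python
import numpy as np

def eif_correct(xi_bar, Omega_bar, mu_bar, z, h, H, Q):
    # EIF correction: Omega_t = Omega_bar + H^T Q^-1 H,
    # xi_t = xi_bar + H^T Q^-1 (z - h(mu_bar) + H mu_bar).
    Q_inv = np.linalg.inv(Q)
    Omega = Omega_bar + H.T @ Q_inv @ H
    xi = xi_bar + H.T @ Q_inv @ (z - h(mu_bar) + H @ mu_bar)
    return xi, Omega
```

For a linear h this collapses to the information filter correction and agrees with the Kalman update on the recovered moments.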
So the standard extended information filter is typically not used, at least not for solving SLAM or mapping problems. But there are extensions of it, in particular the sparse extended information filter I will talk about next week, which make approximations in some of these steps and in this way are able to derive a very efficient algorithm that we can actually use to address the SLAM problem; that is the reason I introduced the filter here.

To summarize what you have seen in this second lecture: Gaussians cannot only be represented in the moment form; we can also use the canonical representation, the information form, to represent the same distribution. We can then use this to do filtering in information form: from the Kalman filter we derived the information filter, and from the extended Kalman filter the extended information filter. Comparing them, one step can be more efficient than the corresponding step of the other filter, but in sum they have the same complexity if both steps have to be performed equally often; so the application really tells us which system is more suitable for us, or which one is easier to maintain. Whether it makes sense to change into the information form really depends on the application. If you want to read more about this, the Probabilistic Robotics book, Chapter 3.5, has an in-depth introduction to the extended information filter, and that is the basic knowledge we will need next week when we go into the sparse extended information filter. So if you leave today thinking that information filtering was not completely clear to you, revisit the slides, reread that chapter, or ask questions: me, Fabrizio, or whoever is here to help you. The information filter really should be clear to you, because next week the sparse extended information filter will get a little more mathematical and a little more tricky in some of the details, so it is worth making sure that you know what an extended information filter is and how it works. That's it from my side for today; I'm looking forward to seeing you next week. Have a nice week!
SLAM_Course_2013
SLAM_Course_15_Least_Squares_SLAM_Cyrill_Stachniss.txt
Okay, so welcome to today's course on simultaneous localization and mapping. The plan for today is to apply the technique we introduced last week, least-squares error minimization using Gaussians, to address the SLAM problem and come up with a maximum-likelihood estimate of the trajectory of the robot. Today we restrict ourselves to the robot's poses; we are going to look into the landmark case next week, which is only a slight variation of what we do today. So we are now within the third main SLAM paradigm, graph-based SLAM, and I will explain in a few seconds why this is called graph-based SLAM and not "SLAM using least squares" or something like that; "graph-based SLAM" is simply one of the most frequently used terms in this context.

The key idea is to use an error-minimization approach, namely the least-squares approach. It is one approach for computing a solution to an overdetermined system, which means we have more equations than unknowns. In our situation, the equations result from the measurements and the odometry information, and the unknowns are the positions of the robot and, in general, the positions of features in the environment. The core of this approach is to define an error function and try to find its minimum, minimizing the sum of the squared errors of the individual terms, where the individual terms correspond to individual measurements or to individual movements. It is a standard approach for a lot of problems across many different disciplines, and today we look at the application of this approach to the SLAM problem, and at which structure we can exploit in the context of SLAM in order to make it an efficient approach. If we simply ran a naive least-squares error minimization on very large problems, without exploiting certain properties that we have in the context of the SLAM problem, it could become computationally very demanding, so it makes sense to exploit some of these properties. One of the properties we are going to exploit is the sparsity of a matrix; we discussed sparseness already in the context of the sparse extended information filter a few weeks back, and we will exploit something similar here in order to compute the solution efficiently.

Okay, so why is it called graph-based SLAM? The reason is that the main visualization we can use is a graph that represents the problem. Consider a robot that moves through the environment; at every point in time we model the robot's pose as a node (those poses are the triangles here). The robot carries out a motion command, and this creates a link between consecutive poses. Such an edge can be seen as a spatial constraint: it constrains the relative configuration of these two poses. Of course this information is uncertain and affected by noise, so if we know where one pose is, that does not mean we know exactly where the other pose is, only approximately; it is a soft constraint resulting from the motion model, and observations give rise to uncertain, soft constraints in the same way. So the odometry information gives us a chain of nodes connected by odometry edges. If, however, the robot revisits a part of the environment, we can relate its current pose to poses the robot has been in before, and these relations can be determined from the measurements. For example, in this situation the robot observes the same part of the environment as it was observing previously; these locations are spatially close, and therefore it can generate these loop-closure constraints from the observations. We might say, for instance, that this pose over here should be one meter to the side of that pose over there, and we determine this by relating the observations we obtained at this location and at the earlier location.
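As a minimal illustration of the graph structure just described (my own sketch, not the course's code), a pose graph can be held in a tiny container: nodes are 2D poses and edges store the relative transformation between two nodes plus an information matrix encoding the constraint's uncertainty.

```python
class PoseGraph:
    # Minimal pose-graph container: nodes are (x, y, theta) poses,
    # edges are (i, j, z_ij, info_ij) tuples, where z_ij is the
    # relative transformation and info_ij its information matrix.
    def __init__(self):
        self.nodes = []
        self.edges = []

    def add_node(self, pose):
        self.nodes.append(pose)
        return len(self.nodes) - 1

    def add_edge(self, i, j, z_ij, info_ij):
        # Odometry edges connect consecutive poses; loop-closure edges
        # connect poses whose observations were successfully matched.
        self.edges.append((i, j, z_ij, info_ij))
```

The same container holds both edge types; only the way a constraint was generated differs, not how it is stored or optimized.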
got at this location as well as location above in order to find these sort of constraints for making observations so the difference in here is this kind of graph I'm using here that there are no features so you can formulate this rough you can formulate this drop as well using features but the deformation we use in here does not these features so the reason why it's called roughly snails bit with this stretcher roll moving from the environment generating notes apartment clothes and generating edges for every laundry information and edges for observation that related to closest you're taking graphic on a slab so against the use this graph to help in a nice visual way represent our problem and every node in the graph corresponds to position of the robot at certain point in time and every edge represents a so-called special constraint or soft constraint on a softball that simply relate to closest with an uncertainty with each other and the key task to become top draft a slam is now first to build up the graph and second to find the configuration of those nodes so that the error that is introduced by this constraint or software Spencer edges is minimized and here we seek to minimize the squared error so if you relate it to what we have in used last week you can see that these functions F we use will be executive functions to represent the oscillations the dormitory information and this ability or error function we'll go to the river that so where's the math so the thing is there's no math exclusively here at the moment but you may remember what we did before Christmas when we said once I know all the poses of the robber accurately met it was very easy person has to be apply the method is an orthosis flavor to the maths work to maintain individual color to the front real anger if I go forever case so what's also completely fist and Morris there so there no in this math we could also introduce those features as additional variables would be really no problem at all since 
we personally are because they were thought was very mess you can we don't want to have a night nurse in there and then we use this Graphisoft also referred as full of personality in courses and mathematically marginalized all features the poses in there plus the estimated position of the posters and maybe you can recover math we're events it's very easy to accept in the mathematical Society meetings so if you go for the future case you would typically or often leave those pictures we looking the future is actually next week so you will see both sides any further questions okay so how does this look like in practice so he said everything else first ones to the postman robot has been during video in that position and so if you go for these are they slammed maybe they'll store through roll men men in that note every this graph every node pointer to data structure Mystica spectra also stores the laser measurements what are they hidden if I have configuration of those notes I can actually map in with no posters using the post of the graph and the neighbor measurements associated to these individual notes and render man and if I do that for automatically information for a little bit rather I will end up with a map which works like this so even if you don't know what they murdered looks like that's actually one of the production holes and kind of sport you can identify that there are some parts of the environment which were very similar in reality these parts of Liam are on exactly the same place is just the orthography information of the rollerball so noisy they created the same rule because what wasn't there twice he created these rumors two different positions in the met although in reality they refer to a new same pose ok so let's say this is an example and try to let's see where the process which will guide us when we build up that route and then optimize that route so how does this problem I would look like this way can see here for the nose a very small economy 
see them so you may see the edges so we have normal travel on the room I belong here because here and here and in this room again and you can see here on these kind of long lines connecting the trajectory of the robot and these are exhibiting those constraints where the road can relate to observation specific with each other and say hey this position already is very close to this division of there and then introduces basic constraints to read them and so for those so with all these anger there's a resident transformation associated with these edits which tells us both of her eyes would be relative to node number J let's say fifty one meter apart so what we aiming up fixing is finding mutations for all these nodes along the trajectory so that the error of use planets of strengths is minimized so the constraints are done by finding correspondences between nodes yeseong is based on an eighth edition yes so you can't miss also another another later so the key thing of this post route that you model I thought that features because speaking of features because in some situations like equipments they don't really want to have features or I don't therefore we only restrict to the pulses so you can see the polls or if it's kind of simplifies our understanding you can see it imposed as something which carries a local map just upon that is a data store in there and then you can simply say that Eiffel note here in a second note here and then look into the local maps and if their local maps looks very very similar or identical it's very likely that they refer to the same place then you can create by matching these two maps you can actually generate all those constraints that's exactly what these constraints do so the features or the map information can be inherently in that representation as you store the raw sends oscillations that relate them for the matching process but they're not explicitly used to the optimization and since we use this graph for the optimization they are 
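Rendering a map from a node configuration, as described above, amounts to projecting each node's stored scan points through that node's pose into the global frame. A minimal 2D sketch with hypothetical function names:

```python
import math

def transform_scan(pose, scan_points):
    # Each node stores its raw scan in the node's local frame; given an
    # (x, y, theta) pose estimate we project the points into the global
    # frame with a rotation by theta followed by a translation.
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in scan_points]

def render_map(poses, scans):
    # Doing this for every node yields the point map of the environment.
    points = []
    for pose, scan in zip(poses, scans):
        points.extend(transform_scan(pose, scan))
    return points
```

With noisy odometry poses the rendered points of a revisited room land in two different places; after optimization they coincide, which is exactly the effect shown in the example maps.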
You may have reasons to do it that way, and you may also have reasons to do it the other way around; next week we will see an example where it makes sense to store the features explicitly to come up with better estimates. Any further questions?

Okay, so now we can push the map into the background, because once we have the graph we don't really need the map we have drawn here; we just render the graph. So we can separate the graph from the map: this is now my graph, and we are just considering this graph, trying to find new configurations of the nodes so that the error is minimized. That does not mean these loop-closure edges need to have zero length, because the robot may really have been at slightly different locations; but typically those locations are close, since the robot needs to observe the same part of the environment, which is why these lines are usually short, comparable to the visibility range of the sensor. For every such edge a relative transformation is stored, which says, for example, that this pose should be two meters apart from that other pose. If I optimize this and try to find the most likely configuration, it looks like this. The small links remaining between the trajectories do not indicate an error between them; they mean, for example, that once the robot was driving on the right-hand side of the corridor and once on the left-hand side, and we were still able to relate those observations; the transformation attached to the edge says the poses are, say, two meters apart. If we take this result obtained from our error-minimization approach and render the map, it looks like this, which is actually very close to the correct configuration of the environment; you can see all these different rooms that the robot visited while driving around. This dataset was recorded in a building basement.

So: we build up the graph, and then we try to find a configuration of the graph that minimizes the error introduced by the constraints that relate the nodes in the graph with each other. Once we have the poses, the mapping problem is pretty easy, because we can just render the map using the data attached to the nodes. That is the overall picture of this approach. In practice, if you implement this, you typically have two components: the so-called front-end and the so-called back-end. The difference between them is the following. The front-end takes as input the raw sensor information and the current configuration of the graph; it takes the current observation and tries to relate it to the other observations it has seen so far. So the front-end is the component that actually builds up the graph and tries to identify the constraints, for example that the robot is currently at a position close to where it was ten minutes ago. The front-end does nothing else than generating these constraints, these edges. It then hands the graph with all its edges to the so-called back-end, the optimization part. The back-end, using an optimization framework, actually optimizes the graph, which means it tries to find new configurations of the nodes such that the error introduced by the constraints is minimized; this is the process I have shown before. Then the back-end reports the optimized positions back to the front-end, and the front-end can exploit this new map, this improved graph, to make hopefully better data associations. So the process iterates between the front-end and the back-end: the front-end finds constraints, the back-end uses these constraints to build a consistent map, the map is reported back to the front-end, and because the front-end now has a better map it can make potentially better data associations.
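The front-end/back-end interplay described here can be sketched as a simple loop; this is purely illustrative (the callables `front_end` and `back_end` are stand-ins for real data association and optimization components, which the course covers separately).

```python
def slam_loop(measurements, front_end, back_end, graph):
    # Iterate between data association (front-end) and optimization
    # (back-end): the front-end extends the graph for each incoming
    # measurement, the back-end returns an improved node configuration,
    # which is written back so the next association step can use it.
    for z in measurements:
        front_end(graph, z)            # add nodes/edges for measurement z
        graph["poses"] = back_end(graph)  # optimized configuration
    return graph
```

Only the `back_end` part is the topic of today's lecture; the `front_end` is where the sensor-specific work lives.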
These improved data associations are then reported to the back-end again, and so the loop between the two components continues. What we are looking at today is only one of these parts: the optimization part, the back-end. Later on we will also see a few examples of front-ends. While the back-end is fairly generic, the front-end heavily depends on your sensor information: whether you use a camera, a laser scanner, an RGB-D sensor or something similar changes the front-end dramatically, but it typically has only very little impact on the back-end. You may need a different internal data structure to represent your problem, or you may need to redefine your error function, that is, what it means to compare two measurements, but these are very small changes to the overall back-end. I don't say the back-end is independent of the sensor, but changing the sensor dramatically changes the front-end and only leads to limited changes in the back-end; depending on the assumptions you make, for example about the visibility range or the information you can expect from your sensor data, you may need to change something there as well.

So today we assume that the graph is given, and that the given configuration of the nodes is not the correct one. The task we address today is to optimize that graph, so that we obtain new node configurations which better explain the observations, that is, which are better in line with what the robot has measured. Let's say our graph has n nodes. We can describe this as one big state vector x, which is just the concatenation of n small vectors x_i, and every x_i is, for example, the pose of the robot: x, y plus the orientation θ if we live in a two-dimensional world; in the three-dimensional world, with a three-dimensional position and a three-parameter orientation representation, this vector will be six-dimensional. And every x_i corresponds to a 2D or 3D
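The statement that every x_i corresponds to a transformation can be made concrete with the usual 2D helpers, commonly called v2t and t2v in this community, converting between an (x, y, θ) vector and a 3x3 homogeneous transformation; composing poses is then just matrix multiplication. A sketch:

```python
import numpy as np

def v2t(p):
    # (x, y, theta) pose vector -> 3x3 homogeneous transformation
    x, y, th = p
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def t2v(T):
    # 3x3 homogeneous transformation -> (x, y, theta) pose vector
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])
```

For example, composing two unit forward motions, t2v(v2t([1, 0, 0]) @ v2t([1, 0, 0])), gives the pose (2, 0, 0).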
transformation, which represents the pose of the robot in space, in terms of position and orientation. Next we need to specify when a constraint exists between two poses. As I told you before, there are two ways in which two nodes get connected. The first one is easy: the robot moves from x_i to x_{i+1}, so we create an edge between them that corresponds to the odometry information; the transformation attached to this edge is exactly the transformation given by the odometry, plus an uncertainty which is associated with the uncertainty of the robot's motion. This edge between x_i and x_{i+1} is directly generated from the odometry. If we don't have any odometry information, we may have a problem, because then we cannot easily connect the nodes into a chain and have to rely heavily on our observations in order to obtain a connected graph; if the robot has no odometry and is additionally blind for a while, of course, we will simply have a graph that is potentially not connected. So odometry is the first way to generate edges, and the second one is the observations.

Here, without going too deeply into the details yet, remember the concept that we ignore the features: we do not model the features, or our local laser observations, as nodes in our graph. Instead, what we do is relate two poses with each other, and we do that by introducing a so-called virtual measurement. This virtual measurement is something that we don't really observe, in the sense that it is not a measurement coming from one specific sensor reading; rather, we use two sensor readings that observe the same part of the environment. Take a camera image from here and a camera image from over there, both observing the same part of the scene; based on that, you can say: I know that this camera must be located relative to that camera like so, perhaps with a slight change in orientation. It is called a virtual measurement because it is as if this camera had measured the position of the other camera; we just obtained that relative position from the sensor information.

Going back to the laser example: this is pose x_i, and this is the local map obtained by rendering the laser scan taken there; here is another pose x_j whose laser scan was covering this part of the environment and gives this map. The exact date does not matter; the point is that if we match those two maps, we can overlay them, a scan-matching technique we discussed very briefly before, and obtain the best possible match, as here where the black lines of the two maps lie on top of each other. Under the assumption that this match is correct, we can say: okay, here is position x_j expressed in the same reference frame as x_i. So we can treat these two laser scans as generating a virtual measurement that relates the two poses in a common frame. We express this by a constraint, with an error e_ij, which encodes that we have seen the same part of the environment from x_i and from x_j. A constraint from i to j means we are sitting in i and looking at j; we generate this measurement by the matching step, rendering the environment from one scan and matching it against the other scan, which gives us the virtual measurement between the two poses. Question: is the number of such measurements the same for each pose, one measurement per pose? Not necessarily; it can be the case, but often this is not the case.
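Under these conventions, the error of a single constraint can be sketched as follows (assuming 2D poses; the v2t/t2v helpers are defined inside so the snippet is self-contained): e_ij compares the relative pose of node j as currently estimated from node i with the virtual measurement z_ij.

```python
import numpy as np

def v2t(p):
    x, y, th = p
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def t2v(T):
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def error_ij(xi, xj, z_ij):
    # Deviation between the predicted relative pose of j seen from i
    # and the virtual measurement z_ij; zero means perfect agreement.
    Xi, Xj, Z = v2t(xi), v2t(xj), v2t(z_ij)
    return t2v(np.linalg.inv(Z) @ (np.linalg.inv(Xi) @ Xj))
```

If the estimated poses exactly satisfy the measurement, the error is the zero vector; any discrepancy shows up as a residual relative transformation.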
So if we have n poses, the odometry gives us n−1 edges, as long as the odometry never fails, and on top of that we have the observation edges. You may generate an observation edge from your current pose to the next pose if the two observe the same part of the environment, which they typically do; but if you revisit a place, you generate additional edges, and if you revisit the same place several times, you may add constraints to all the previous visits. So you may end up with a huge number of edges connecting the nodes. The number of edges therefore depends strongly on the sensor and on the environment; there is no fixed ratio between n and m. [Question: can you explain again how the edges are defined, in particular for loop closures?] The sequential part is quite easy; more tricky is the loop closure, where the virtual measurement is added between x_i and x_j. Yes, it is a directed graph: the edge connects x_i and x_j and carries a relative transformation. As long as that transformation is full rank, it can be inverted, so you can also express it the other way around. To be precise about the direction: e_ij is the error of the measurement obtained by sitting in i and looking towards j — the first index is the place from which you look, the second the pose you observe. But since the transformation is full rank, you can always convert it, so it does not have to be set up this way; you could equally define it the other way around. [Question about real time: you have the odometry and the observations, but producing the pose graph is not a real-time operation — how long do you keep everything in memory, and what about revisiting?] At the moment I don't really want to talk about real time; questions of how to do things efficiently and how to come up with online approaches we will discuss later on. Just as a side note: "real time" is something that different communities outside robotics define in different ways, with certain guarantees on execution times, so I try to avoid the term — some people use it in a precisely defined way, others in a very loose way. I prefer the term "online": online means you only make a small number of changes to revise your solution from the previous problem to the new one, and this requires a limited number of steps even as the problem grows. One point can be critical here: in the very basic formulation we presented, the graph grows over time, because at every point in time I add a node and potentially many edges. If you do it the very naive way and try to match every new observation against every pose you had before, that is at least linear complexity per step, so at some point you may not have enough computational resources available to do that within a given time frame. We then try to fix that by approximations, for example by fixing parts of the graph — we will come to that later. Also note what this approach does for the SLAM problem: we estimate the full trajectory, and the optimization allows us to revise robot poses in the past. Okay, so we had this virtual measurement that we generate between two poses. Are there any questions at this moment?
[Question: if the robot is stationary, does it generate a lot of virtual measurements at every step?] A virtual measurement is only generated between two real measurements. For every real observation you obtain, you may generate one or more virtual measurements — but remember, if you don't find any match, you don't get anything; if you find a match to one pose in the past, you create one edge; if you find matches to ten previous poses, you create roughly ten edges. So for every observation you obtain, you need to match it against previous observations. Of course, you can restrict your search: there are techniques for performing this matching much more efficiently, for example by considering the uncertainty associated with all the nodes. You may say: I only consider those poses that lie within the 99% probability region of my current estimate, and restrict the search to that space, so we don't match everything against everything. Now, how do we represent these transformations? There are of course different ways to represent transformations, and one way often used is homogeneous coordinates, which allow you to model rotations as well as translations in a unified framework of a single matrix. If you write down an odometry-based edge relating the pose of node i and the pose of node j, with both represented in homogeneous coordinates, you can express it as X_i^{-1} X_j — the inverse of the matrix X_i times X_j — which is x_j in the reference frame of x_i. This expression tells us how node i sees node j. Most of you have probably seen homogeneous coordinates before, so here is just one single slide that tries to explain the idea; if it is new to you, you may want to revisit it in any standard textbook or just google for homogeneous transformations. The key idea is that you take your n-dimensional space and express it in an (n+1)-dimensional space: you add an additional dimension, and this allows you to combine rotations and translations in the same concept of an (n+1)-dimensional matrix. So if (x, y, z) is a point in your original space and you want to transfer it into homogeneous coordinates, you just add another dimension which is 1: (x, y, z) becomes (x, y, z, 1). For the backward transformation you have (x, y, z, w), and you divide all the other dimensions by w: x becomes x/w, y becomes y/w, and so on; if w is 1, you simply get the original values back. So we have a vector in homogeneous coordinates given by (x, y, z, w) with w = 1, and the nice thing is that with this four-dimensional vector we can express translations and rotations as four-dimensional matrices. A translation is an identity matrix plus the three elements of the translation in the last column: if I multiply T times the vector, I obtain (x + t_x, y + t_y, z + t_z), and w keeps the same value. This is kind of nice and easy: I do a translation by multiplying a matrix with a vector. The rotation works in exactly the same way: if you have your 3×3 rotation matrix, you just add a row and a column which are all 0 except for a 1 in the corner. These matrices can then be nicely chained — that is the idea of homogeneous coordinates — and you don't need to worry about why this is possible; you just work with matrices, carry out the transformations, and do all the operations you need. It is quite convenient. Does anyone need further explanation? This was of course only a rough overview — we don't go into the details of the algebra here —
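The idea above can be sketched in a few lines for the 2D case. This is a minimal illustration, assuming the v2t/t2v naming convention used in the exercises; the numbers in the usage example are made up.

```python
import numpy as np

# v2t embeds a minimal (x, y, theta) pose as a 3x3 homogeneous matrix,
# t2v maps a homogeneous matrix back to the minimal representation.
def v2t(v):
    x, y, th = v
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def t2v(T):
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

# Rotation and translation now compose by plain matrix multiplication:
T = v2t([1.0, 2.0, np.pi / 2])      # rotate 90 degrees, translate (1, 2)
p = T @ np.array([1.0, 0.0, 1.0])   # point (1, 0) with w = 1 appended
# p is (1, 3, 1): (1, 0) rotated to (0, 1), then shifted by (1, 2)
```

Chaining several motions is just a product of such matrices, which is exactly why this representation is convenient for the pose graph.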
but if you want to implement it — which you will have to do in the exercises — you will need to come back to this. Are there any questions about homogeneous coordinates at this point? Otherwise, if more explanation is needed later, let us know. [Question about the expression X_i^{-1} X_{i+1}.] The result is the relative transformation, the relative movement from x_i to x_{i+1}. You can see it this way: X_i represents the pose of node i and X_{i+1} the pose of node i+1, both in the global frame. What you do is apply the inverse transformation — how to go from x_i back to the origin — and then how to go from the origin to x_{i+1}; the combination gives you the relative transformation between the two. Okay. So we talked about these transformations, and so far I presented them, in a way, as if they were not affected by noise. But as you know, that is not the case: all these observations are affected by noise, and here we typically assume Gaussian noise. Why do we typically do that? We discussed it last week: if we assume Gaussian noise — every distribution involved is Gaussian — then minimizing the squared error, which is what least squares does, yields the maximum likelihood estimate, and for a Gaussian that coincides with the mean. So there is a direct connection between this optimization framework and probability theory. Okay. We also had our information matrix Ω_ij, which models the uncertainty of an edge: the bigger it is, the more this constraint matters in the optimization — it is a kind of weighting. So the question is: what does the information matrix look like in the case of scan matching versus odometry? For odometry you would typically have a diagonal, or close to diagonal, information matrix, one block per constraint between adjacent nodes. Note that this is the information matrix of a single constraint, so it is just a small 3×3 or 6×6 matrix; how this affects the big system matrix we will look into later on. So if you compare odometry to scan matching, which information is more accurate? Typically the scan matching is more accurate. What does that imply for the information matrix? The information matrix is the inverse of the covariance matrix, so if the uncertainty is smaller, the values of the information matrix will be bigger for scan matching — the determinant will typically be larger. A constraint with a larger information matrix matters more in the optimization, because we are more certain about it. Now, how would this information matrix look for a scan matcher driving through a very long corridor with no distinctive features, just the walls on the left and right — a fully connected corridor? What does the uncertainty ellipse corresponding to the covariance matrix look like? It would be very narrow towards the right and left side of the corridor, because the robot can estimate its lateral position very well, and very long along the main axis of the corridor, because there is very high uncertainty along that direction.
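The corridor intuition can be put into numbers. This is an illustrative sketch with made-up values, only to show the relationship between the covariance ellipse and the information matrix.

```python
import numpy as np

# Scan matching in a featureless corridor aligned with x: the lateral
# position y and the heading are well constrained, motion along x is not.
# The numbers are invented for illustration.
Sigma = np.diag([4.0, 0.01, 0.01])   # covariance of (x, y, theta)
Omega = np.linalg.inv(Sigma)         # information = inverse covariance
# Omega = diag(0.25, 100, 100): little information along the corridor
# axis, a lot across it -- exactly the long, narrow uncertainty ellipse.
```

A constraint with this Ω pulls the optimization strongly sideways but barely along the corridor, which is the behavior the lecture describes.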
[Comment from the audience: can it be the other way around, with odometry being more accurate?] Yes, it can also be the other way around — for example, odometry measuring the turning angle via the wheelbase may be better than what the scan matcher provides in such an environment. The main point I wanted to make is this: the odometry is basically unaffected by what is around the robot — if it drives at the same speed in the same way, the odometry error behaves the same — whereas scan matching can vary dramatically depending on what the environment looks like. So in general, scan matching constraints tend to have higher information content, but this information matrix strongly depends on the environment structure, and if you ignore that, you actually make mistakes. Just keep that in mind: we often say scan matching is better than odometry, but that is not true in every situation. You can relate this to the Rao-Blackwellized particle filter we discussed before Christmas: where were the particles of the proposal distribution distributed? Exactly along the main axis of this ellipse — the particles automatically adapted to the uncertainty, because the improved proposal approximated the real observation likelihood. Here we squeeze everything into a Gaussian approximation, but we should make sure that the uncertainty associated with scan matching actually takes into account the real information the environment provides. Okay. So let us look at a very small part of the graph. This is our x_i, and this may be our x_j. What do we have? We have an observation from x_i to x_j: the measurement z_ij says that, seen from i, j is over here, and this ellipse is the uncertainty associated with the observation. The real configuration, however — where the graph currently places x_j — is not over there but over here, so there is an error between the two. Our key goal is to minimize the sum of these errors, iterating over all observations in our graph, and to come up with new configurations for x_i and x_j so that the overall error shrinks — we just apply what we talked about before. So our error function looks very suitable for the least-squares optimization we introduced: we have these virtual and odometry constraints, and for each constraint an individual error vector and an information matrix, and we sum up the error terms. Okay, so what does the state vector look like for our now more concrete SLAM problem? Our robot moved through the environment, and the state vector is made up of each pose along the trajectory, so its dimension is 3D, or 6D, or whatever the pose dimension is, times the number of nodes. The state vector x consists of individual blocks x_1 to x_n, where each block expresses the configuration of one single node. In a 3D world each block has six parameters, so you have six-dimensional vectors per pose; in a 2D world each block is (x, y, θ), and the full vector has n times three dimensions. For the operations we use homogeneous coordinates: each pose gives rise to a homogeneous matrix, and although that matrix has more elements, they all depend on only those few values — x, y, z plus three parameters for the rotation — so the matrix entries depend on each other, and we can go back and forth between the two representations.
Okay, how does the error function look? [Question about whether the error uses homogeneous coordinates.] We are not talking about the information matrix at the moment; we just look at the error vector. Remember the general form: the overall objective is the sum over all constraints of e_ij^T Ω_ij e_ij — the error vector transposed, times the information matrix, times the error vector. So what should e_ij be? A first guess might be something like just the difference between two poses: we have an estimated position and a measured position, so take the difference. Yes, in a purely Euclidean space you would just subtract them, but consider that we are living in the world of transformations with rotations: simply subtracting the pose vectors, x_j minus x_i, is not the right thing to do. What we want instead is the following. We have a measurement z_ij — from odometry or a virtual measurement — and we compare it with the relative transformation that the current graph configuration predicts. In homogeneous coordinates this reads Z_ij^{-1} (X_i^{-1} X_j). Let us take it apart: X_i^{-1} X_j is x_j seen from x_i according to the current graph configuration, and Z_ij is x_j seen from x_i according to the observation. The inverse appears because we compare the one against the other: one term we take from our current graph, the other from our observation — and remember, the virtual measurement is computed from the raw observations. This product gives us a matrix in homogeneous coordinates which specifies the error, and then we still have to map that back from homogeneous coordinates into the minimal parameter space. Why do we want the minimal representation instead of keeping everything in the homogeneous matrix? If you kept the full matrix, you would have to treat all the individual elements of the homogeneous matrix as state variables, and that matrix has more variables than degrees of freedom. When you optimize over it, you may end up in configurations which no longer correspond to a proper homogeneous transformation. So you want to go back to your minimal representation. This is a side note — we will come back to this issue later on, because there are reasons to deliberately use all the entries of the matrix in some situations, but not today.
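The error of one edge can be written down directly from the formula above. A minimal 2D sketch, assuming the usual v2t/t2v helpers (repeated here so the snippet is self-contained):

```python
import numpy as np

def v2t(v):
    x, y, th = v
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def t2v(T):
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def error_ij(xi, xj, zij):
    """e_ij = t2v( Z_ij^{-1} (X_i^{-1} X_j) ).

    X_i^{-1} X_j is x_j as the current graph predicts it, seen from x_i;
    Z_ij^{-1} compares that prediction against the measurement."""
    Xi, Xj, Zij = v2t(xi), v2t(xj), v2t(zij)
    return t2v(np.linalg.inv(Zij) @ np.linalg.inv(Xi) @ Xj)

# If the measurement exactly matches the configuration, the error is zero:
e = error_ij([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

With a mismatched configuration, e.g. x_j at 0.9 m but z_ij saying 1 m, the error comes out as (−0.1, 0, 0), which is exactly the residual the optimizer will try to remove.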
So this is exactly what the function t2v does — "transformation to vector". You have implemented such functions already in some of the exercises: they map from the homogeneous-matrix representation back to the minimal representation, for example (x, y, θ) in 2D. So the error is e_ij = t2v(Z_ij^{-1} (X_i^{-1} X_j)), where Z_ij is our virtual or real measurement and X_i^{-1} X_j is what the graph predicts. If the observation exactly matches the configuration of the graph, the error takes the value zero — zero translation and zero rotation in all dimensions. Note that e_ij is formally written as a function of the whole state vector x, but each constraint only involves two poses, so most of the state vector is ignored in every single error term: each error edge only touches the two poses it connects. Okay, let us continue with a very brief repetition of the Gauss-Newton procedure, because now we go through it for the concrete SLAM problem. The first thing we need to do is define our error function — that is exactly what we just did. The next steps are: we linearize the error function by a Taylor expansion, which requires computing the corresponding Jacobian; we build the squared error function; we set its first derivative to zero; this leads to a linear system which we solve; and the solution of the linear system gives us an update for our state — how we need to change x to get closer to the optimum. Then we iterate this procedure until convergence. Okay, let us go through the details. We linearize the error function: we write e_ij(x + Δx) ≈ e_ij(x) + J_ij Δx, where x is the linearization point, Δx is a small deviation from it, and the Jacobian J_ij is the matrix consisting of the partial derivatives of the error function. Now the question: does one individual error term e_ij depend on all state variables? No — it only depends on the two blocks of state variables that correspond to the poses x_i and x_j; all others are irrelevant here. Given that, what does this imply for the structure of the Jacobian? The Jacobian contains the partial derivatives of the individual dimensions of the error function with respect to all the individual state variables — and it is full of zeros: since the error function only depends on x_i and x_j, the partial derivatives with respect to all other variables are zero. There are just two blocks which are nonzero in the Jacobian of one specific error term: the i-th block contains the partial derivative of the error vector with respect to x_i — call it A_ij — and the j-th block the derivative with respect to x_j — call it B_ij. So the Jacobian of the constraint relating nodes i and j is J_ij = (0 … A_ij … B_ij … 0): everything shown in blue consists of zeros, and A_ij and B_ij are the only two nonzero blocks. This is a very important thing to note: the Jacobian is sparse. Why is this important? [A student suggests it has to do with the initial guess, or with setting the first derivative to zero.] Not directly.
What matters is what the Jacobians contribute to the linear system: the matrix used to represent the linear system is a sum of terms, each a Jacobian transposed, multiplied by the information matrix, multiplied by the Jacobian. Since the Jacobians are almost entirely zeros, each of these terms is zero everywhere except at very few positions, and this means we get a linear system to solve in which most entries are zero — and a linear system whose matrix is mostly zeros can be solved efficiently with sparse solvers. This becomes clearer if we look at the two terms we need to compute to define our linear system: the vector b and the matrix H, where H is the sum of the individual Jacobians multiplied with the information matrices, H = Σ J_ij^T Ω_ij J_ij, and b is the sum b = Σ J_ij^T Ω_ij e_ij. Let us visualize what that means. Take b_ij: the Jacobian J_ij is zero everywhere except at the two blocks A_ij and B_ij. The information matrix Ω_ij is a small matrix whose size corresponds to the dimensionality of one individual block, and it can be fully populated; likewise, all dimensions of the error vector e_ij are in general populated. So we end up with a vector b_ij of the full state dimension which is populated in only two places: most of the elements stay zero, and only two blocks — two times the dimensionality of an individual pose, e.g. 3 or 6 parameters each — are nonzero; everything else is zero. Now look at the structure of H_ij, the contribution of one single constraint: the Jacobian transposed, times the information matrix, times the Jacobian. This blows up to a matrix of the full state dimension in which all elements are zero except four blocks: the blocks (i, i) and (j, j) on the main diagonal, and the off-diagonal blocks (i, j) and (j, i) — and H is symmetric, as we discussed last time. So every constraint contributes to the vector b by making two block elements nonzero, and to the matrix H by making four matrix blocks nonzero; the rest is zero. When we sum over all constraints, the vector b typically ends up fully populated, but for the matrix H — with the odometry constraints forming a chain and the loop closures adding a few extra blocks — the system stays sparse: the matrix has a lot of zero elements. The linear system we need to solve is H Δx = −b, where H is sparse (b is not sparse, but H is), and this allows us to solve the linear system very efficiently even when it is very high dimensional, for example using a sparse Cholesky factorization. I sketched the Cholesky factorization very briefly last time: it decomposes the matrix into two triangular matrices with which you can solve the system very efficiently, and if the matrix is sparse, the factors can be computed even more efficiently.
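The "two blocks into b, four blocks into H" pattern can be written down directly. A sketch, assuming given Jacobian blocks A = ∂e_ij/∂x_i and B = ∂e_ij/∂x_j and a block size d; the function name is illustrative.

```python
import numpy as np

def add_edge_to_system(H, b, i, j, A, B, e, Omega, d=3):
    """Add one edge's contribution: two blocks of b, four blocks of H."""
    si, sj = i * d, j * d
    b[si:si + d] += A.T @ Omega @ e
    b[sj:sj + d] += B.T @ Omega @ e
    H[si:si + d, si:si + d] += A.T @ Omega @ A
    H[si:si + d, sj:sj + d] += A.T @ Omega @ B
    H[sj:sj + d, si:si + d] += B.T @ Omega @ A
    H[sj:sj + d, sj:sj + d] += B.T @ Omega @ B

n, d = 3, 3
H = np.zeros((n * d, n * d))
b = np.zeros(n * d)
# One edge between nodes 0 and 2; for the 2D error above, A = -I and
# B = I when both poses and the measurement have zero rotation.
add_edge_to_system(H, b, 0, 2, -np.eye(3), np.eye(3),
                   np.array([0.1, 0.0, 0.0]), np.eye(3))
# The block row and column of node 1 remain exactly zero.
```

Note that H stays symmetric by construction, since the (i, j) and (j, i) blocks are each other's transpose.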
That was the visual argument; you can also show it algebraically. For the vector: b_ij^T = e_ij^T Ω_ij J_ij, and since J_ij has only the two nonzero blocks A_ij and B_ij, the product has only the two corresponding nonzero blocks — you can verify this by multiplying the terms out, knowing that everything except those two blocks is zero. Exactly the same holds for our matrix H: H_ij = J_ij^T Ω_ij J_ij, where the Jacobian is zero except for the two blocks, and even though the information matrix is dense, the product is zero everywhere outside the four blocks A_ij^T Ω_ij A_ij, A_ij^T Ω_ij B_ij, B_ij^T Ω_ij A_ij and B_ij^T Ω_ij B_ij. So when we build up this big matrix H in the computation, we don't need to build up all the big H_ij matrices and sum them up; it is more efficient to generate just those four blocks and add them into the matrix. [Question: shouldn't the blocks at (i, j) and (j, i) be transposes of each other?] Yes — those two blocks are each other's transpose: if you swap the order you transpose the product, and since the information matrix is symmetric, it works out. So this is one of the key insights needed to solve the SLAM problem in an efficient manner with this technique: the matrix H is sparse, and since it is sparse, I can actually solve the system efficiently. To summarize: one single edge in the graph contributes only to the i-th and j-th blocks of the vector b and to four blocks of the matrix H; as a result the system is sparse and can be solved efficiently, for example with a sparse Cholesky factorization. Okay, let us look at the linear system, the next thing we need to do. Remember, we are not solving for x itself but for the increment Δx with respect to our linearization point x. We have Δx with all its individual blocks, we have all the b contributions, and we have all the H contributions — some blocks will be zero, some not. The reason why I write H with a bar and distinguish it from H_ij is that the big matrix is the sum over all the H_ij contributions: even the block connecting one pair of poses can be the sum of several H_ij terms, if multiple constraints connect the same pair. So the bar just distinguishes the summed system matrix from the individual contributions; it has no other meaning. These are the elements we need to compute for the linear system, and we do it in the following way: for every constraint I compute the error and the Jacobians, then I update the coefficient vector b by adding, for each constraint, the term J^T Ω e into the corresponding blocks, and I update the big matrix H by adding the four blocks at the corresponding positions. I simply iterate over all constraints, so building the system is linear in the number of constraints — just as you would do it in any other situation. The overall algorithm can then be written in a very simple way: while the system has not converged — for example, while the changes Δx in my updates are not yet very small, or whatever criterion you like — build the linear system H and b, solve H Δx = −b ("solve sparse" means using a sparse solver, otherwise it is very inefficient), and then apply the increment, which is the best update given the linear approximation. That is what the algorithm looks like, and it is actually what you will implement in the homework over the next weeks: set up and solve this system over all nodes. In the end you get a technique that you can use very effectively. Now I want to go through a small example on the whiteboard and illustrate one iteration — just to see a complete example of what happens, and also to see a problem we run into if we do only what we have discussed so far.
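The whole loop can be sketched end to end for 2D poses. This is a self-contained illustration, not the homework solution: it uses numerical finite-difference Jacobians instead of the analytic ones, a dense solve instead of a sparse one, and it anchors the first pose with a strong prior so the system is solvable (the example that follows shows why some such anchor is needed). All function names are illustrative.

```python
import numpy as np

def v2t(v):
    x, y, th = v
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def t2v(T):
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def error_ij(xi, xj, zij):
    return t2v(np.linalg.inv(v2t(zij)) @ np.linalg.inv(v2t(xi)) @ v2t(xj))

def numeric_jacobian(f, v, eps=1e-6):
    # finite differences stand in for the analytic Jacobian blocks A, B
    J = np.zeros((3, 3))
    for k in range(3):
        d = np.zeros(3); d[k] = eps
        J[:, k] = (f(v + d) - f(v - d)) / (2 * eps)
    return J

def optimize(x, edges, iterations=10):
    x = [np.asarray(p, dtype=float) for p in x]
    n = len(x)
    for _ in range(iterations):
        H = np.zeros((3 * n, 3 * n)); b = np.zeros(3 * n)
        for i, j, z, Om in edges:          # linear in the number of edges
            e = error_ij(x[i], x[j], z)
            A = numeric_jacobian(lambda v: error_ij(v, x[j], z), x[i])
            B = numeric_jacobian(lambda v: error_ij(x[i], v, z), x[j])
            si, sj = 3 * i, 3 * j
            b[si:si + 3] += A.T @ Om @ e
            b[sj:sj + 3] += B.T @ Om @ e
            H[si:si + 3, si:si + 3] += A.T @ Om @ A
            H[si:si + 3, sj:sj + 3] += A.T @ Om @ B
            H[sj:sj + 3, si:si + 3] += B.T @ Om @ A
            H[sj:sj + 3, sj:sj + 3] += B.T @ Om @ B
        H[0:3, 0:3] += np.eye(3) * 1e6     # anchor the first pose
        dx = np.linalg.solve(H, -b)        # use a sparse solver in practice
        x = [p + dx[3 * k:3 * k + 3] for k, p in enumerate(x)]
        if np.linalg.norm(dx) < 1e-9:      # convergence criterion
            break
    return x

x = optimize([[0, 0, 0], [0.9, 0, 0]],
             [(0, 1, np.array([1.0, 0.0, 0.0]), np.eye(3))])
# x[1] moves to roughly (1, 0, 0), satisfying the single odometry edge
```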
have done so far if you see we're dealt and then try to understand what the problem was so consider we have a very very very simplistic round just messes up three node X 1 X 2 and n 3 and in reality they have 1 meter away from each other we're living in a very easily doable for multiple angles and so in reality this is imported it's 0 1 & 2 so that's one dimensional so let's see how this yourself so let's say we have a initial guess which says they are all 0 so my X is 0 0 to 0 this is X 1 X 2 and X 3 now when it goes through the initial steps of the algorithm and so they begin in our languages since we're living in a 1 deep inner world which is easier than being at it before a lot of observations directly observes how X 1 for example seems X 2 and I just need to subtract it from where they are reality so it is thanks Jay - I hi this is my arrow - very easy simply this they have two observations that one - so they see each other and next two three two and three and they have to all the patients are perfect and so they those report they have one week apart from each other at the result of that I have two values for my error function and Herald one to which I need to compute the current iteration what is the error Ludington that one to leave on to is it's it's one both spaces first this one is C 1 2 which is 1 1 0 1 and sealer was one for the other way it's excited to see to focus Arabella special euro okay I need one to look and so we just had one dimensional heat up the elements in here so this D D because we have just one dimensional vectors let's you be here X long so what's that right this time sneezes respect to those x1i they have one too they derived with respect to that I this one minute there's my little supplies there was one this is that's Phi J minus XJ plus but right up to the excited one so this is 120 right up this respect to x2 yeah all that or coconut be free let it rise it perfectly x10 respect to X 2 to the 1 and hey so what well next thing I need 
to do is to build the linear system: b = Σ J_ij^T Ω_ij e_ij and H = Σ J_ij^T Ω_ij J_ij, which in this example turns into the following — we have specified the information matrices used here, and they are one-dimensional. Let's start with the first constraint: the error e_12 = −1, the information matrix Ω_12 is just 1, so J_12^T Ω_12 e_12 = (−1, +1, 0)^T · 1 · (−1) = (1, −1, 0)^T. For the second constraint I have Ω_23 = 5, so J_23^T Ω_23 e_23 = (0, −1, +1)^T · 5 · (−1) = (0, 5, −5)^T, and summing up gives b = (1, 4, −5)^T. The same for H, where all the contributions have to be summed up: H = H_12 + H_23. What dimension does this matrix have? It is 3 × 3, because we have a three-dimensional state vector in our case. So what do we get? J_12^T · 1 · J_12 gives the matrix [[1, −1, 0], [−1, 1, 0], [0, 0, 0]], and J_23^T · 5 · J_23 gives [[0, 0, 0], [0, 5, −5], [0, −5, 5]]; all the rest of the entries are 0. Adding them up, H = [[1, −1, 0], [−1, 6, −5], [0, −5, 5]] — and note it is symmetric. What I need to do now is solve H Δx = −b. But look at this matrix: call the entries of the first row a, b, c and sum them up — 1 − 1 + 0 = 0. The second row: −1 + 6 − 5 = 0. The third row: 0 − 5 + 5 = 0. Every row sums to zero, which means H · (1, 1, 1)^T = 0, so the matrix is singular — it only has rank two — and the linear system is under-determined: we don't get one unique solution. And this is built into how we set up the system, because the sensor measurements only relate one node to the other.
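This accumulation can be written down in a few lines. A minimal sketch of the toy example (the information values 1 and 5 for the two constraints are the ones used in the example above):

```python
import numpy as np

# Toy 1D graph from the lecture: three nodes x1, x2, x3 initialized at 0,
# perfect measurements z12 = z23 = 1, information values 1 and 5.
x = np.zeros(3)                      # initial guess (0, 0, 0)
constraints = [                      # (i, j, z_ij, omega_ij)
    (0, 1, 1.0, 1.0),
    (1, 2, 1.0, 5.0),
]

H = np.zeros((3, 3))
b = np.zeros(3)
for i, j, z, omega in constraints:
    e = (x[j] - x[i]) - z            # error: predicted offset minus measured one
    J = np.zeros(3)                  # Jacobian of e w.r.t. the full state
    J[i], J[j] = -1.0, 1.0
    H += omega * np.outer(J, J)      # accumulate J^T * Omega * J
    b += omega * J * e               # accumulate J^T * Omega * e

print(b)                             # -> (1, 4, -5)
print(H)                             # -> [[1,-1,0], [-1,6,-5], [0,-5,5]]
print(np.linalg.matrix_rank(H))      # -> 2, i.e. H is singular: H @ (1,1,1) = 0
```

The rank-two result is exactly the rows-sum-to-zero observation from the text: shifting all nodes by the same amount leaves every error unchanged.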
So a question: if we have the measurement one-two and the measurement two-three, is the problem that the system doesn't know the direction in which we measured? The direction is not the issue here — you could make that point, but what we actually have is a structural problem: the constraints only tell us how the nodes are placed relative to each other; we only have relative constraints. So we perfectly know the relative configuration of these nodes, but we have no idea where they lie in our reference frame — we can move them around as a whole however we want, and that is exactly the freedom we get from the linear system. Even if we had hundreds of measurements of these nodes measuring each other, we could only determine the configuration up to the choice of the coordinate frame. The good thing is that we can fix that very, very easily: we set the first node to zero. And how do we do that — how do we fix the first node to zero? One way is to eliminate the corresponding row and column of the matrix, through Gaussian elimination. The other way, which is more intuitive and therefore the one I like, is to just add a constraint: we say x1 shall be 0. This is just an additional constraint that we add to our H matrix, so H is the sum of the constraint terms we had before plus a constraint which forces x1 to be 0 — as you can see here, a constraint with information matrix 1 which imposes that x1 should be 0 — and with it our linear system becomes a well-defined system. Exactly: this is nothing else than a constraint which says this pose is anchored at zero; it is my choice of the origin of the reference frame. The numbers — I invite you to compute them at home; it works out nicely. And whenever we run this on a real problem, for example the one from before where the robot was driving straight through the environment, we need to add one of those constraints, because it fixes the origin of the reference frame.
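If you want to check the numbers at home, here is a minimal sketch (again assuming information values of 1 and 5 for the two constraints, as in the example above) of adding the anchoring prior and solving:

```python
import numpy as np

# Toy 1D graph: x1, x2, x3 initialized at 0, measurements z12 = z23 = 1
# with information values 1 and 5.
x = np.zeros(3)
H = np.zeros((3, 3))
b = np.zeros(3)
for i, j, z, omega in [(0, 1, 1.0, 1.0), (1, 2, 1.0, 5.0)]:
    e = (x[j] - x[i]) - z
    J = np.zeros(3)
    J[i], J[j] = -1.0, 1.0
    H += omega * np.outer(J, J)
    b += omega * J * e

# Gauge fix: a prior "x1 = 0" with information 1 simply adds 1 to H[0, 0]
# (its error is zero at the current guess, so b is unchanged here).
H[0, 0] += 1.0

dx = np.linalg.solve(H, -b)          # H now has full rank
x += dx
print(x)                             # -> (0, 1, 2): the true configuration,
                                     #    recovered in one step (problem is linear)
```

Because the toy problem is linear, a single Gauss-Newton step lands exactly on the true configuration 0, 1, 2.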
This constraint connects the graph to the global reference frame: it says that the first node should be at (0, 0, 0), and whenever you move it away from the origin you get an error, so the optimization keeps it anchored there. Okay — that was the only thing that was missing so far. If we take it into account, we can run the optimization, and the result looks like this. Here you see, for example, one of our robots mapping our campus; the viewer simply renders the optimized positions, and this is a kind of zoomed-in view. You can see the robot traverses the environment and gets new constraints, and — you just saw the structure snap — that was the optimization being executed, which corrected all the poses along the whole graph. So you really see the map snapping; this happens in an optimization run whenever something has changed, because new constraints introduce an error — the robot accumulated errors while driving through the environment — and then we rerun the optimization. At the beginning, when a loop is closed for the first time, you see the biggest corrections; afterwards the system is more certain and the updates get smaller. So whenever the robot re-enters a known area, you see the map, which was smeared, getting optimized; whenever you get a loop closure you will have these corrections, but the more the robot moves through the environment — think of the graph as ground getting stiffer — the more certain the configuration becomes, and therefore the updates tend to be smaller. Question: why are some points further away from the trajectory not corrected — the red ones here? Those are, roughly speaking, wall pieces that are not part of what is being optimized; the system is also trying to find candidate poses for loop closures, which are the ones considered for the scan matcher — so that was just something displayed by the viewer, not part of the optimization itself.
When you navigate, you don't want to search through your whole map, just the part where you could possibly be; in theory you could simply take the odometry, estimate the uncertainty, and draw the 3-sigma bound on the map — that was what was shown before. What are we doing here? After every optimization step we just take the current poses of the robot and, doing mapping with known poses, completely regenerate the map that is displayed in the video — once I know all the poses, building the map is easy. Okay, so what we have discussed today is one way to address the graph-based SLAM problem, in the form of least-squares error minimization, building on the techniques we discussed last week and this week. We also looked into the properties this problem has: the matrix H is sparse, which allows us to solve the system efficiently. H is sparse, first, because each error term only relates a small number of nodes with each other, and second, because from every point in the environment we can only see a small local area. If you had a sensor which always sees the whole world, so that every pose had constraints to all other poses, you would have the same kind of system but with many, many more constraints; the full matrix H would be dense, and then it wouldn't be efficient anymore. So it is really sensor- and environment-specific — the visibility range of our camera, how open our environment is, how far we can look, whether there are walls we cannot see through — this makes a big difference in how efficiently we can solve this. For the next exercise you should actually implement this: consider the 2D graph, where each pose is given by three parameters x, y and the orientation, and express the poses as homogeneous transformations using the function v2t (vector to transformation),
which is provided in the MATLAB toolbox we are using for the exercises. To compute the error of a constraint between the nodes i and j, you take the homogeneous transformations X_i and X_j of the two poses and the transformation Z_ij of the observation; the error is then the transformation Z_ij^{-1} (X_i^{-1} X_j), mapped back to a vector — this is exactly what this transformation looks like. Then a few hints for the exercise: these are the Jacobians you should use, and the goal is to end up with exactly this — an optimizer which optimizes the graph by minimizing the squared error. If you need more details, you might want to look at the graph-based SLAM tutorial from 2010, which you find on the website and which goes through the whole process in quite some detail — all these derivations are in there, even more detailed than we have done them here — so go through the tutorial if you wonder how the derivations work. Okay, are there any questions? Then I think we're done — see you.
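The v2t / t2v helpers mentioned for the exercise are simple. A Python sketch of what such functions typically compute (the names mirror the MATLAB toolbox functions; the exact toolbox implementation may differ):

```python
import numpy as np

def v2t(p):
    """Pose vector (x, y, theta) -> 3x3 homogeneous transformation."""
    x, y, th = p
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def t2v(T):
    """3x3 homogeneous transformation -> pose vector (x, y, theta)."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

# Error of a pose-pose constraint, as described for the exercise:
#   e_ij = t2v( Z_ij^-1 * (X_i^-1 * X_j) )
Xi, Xj = v2t([0, 0, 0]), v2t([1, 0, 0])
Zij = v2t([1, 0, 0])                       # measurement agrees with the poses
e = t2v(np.linalg.inv(Zij) @ (np.linalg.inv(Xi) @ Xj))
print(e)                                   # -> (0, 0, 0): zero error
```

Since the measurement here exactly matches the relative pose, the resulting error vector is zero.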
SLAM_Course_2013
SLAMCourse_17_LS_SLAM_with_Landmarks_201314_Cyrill_Stachniss.txt
So, welcome to the course. Today we are going to continue with graph-based SLAM and look into one instance of it which we haven't talked about before. So far we basically used pose graphs, meaning the graph that we were considering only contains robot poses. From a formulation point of view, if we are working in the context of landmarks — distinct places in the environment that the robot can perceive — then it would actually be more natural to also add those landmarks to our graph and perform the optimization including them. Mathematically, what we have done so far actually marginalized out the features, in the sense that we created kind of local maps and had this concept of the virtual measurement, which relates two poses through the environment, or through the measurements that were taken at those locations. If you formulate this as a full graph and then eliminate the features in the environment, you can actually see — or frame it this way — that the pose graph is a variant of the original graph where the features have been marginalized out. So what I would like to look into today is what happens if we don't work based on grids, so if we really observe landmarks in the environment. These landmarks can be, as in the example we had before, the trunks of the trees that we have seen in the Victoria Park data set, or they can be visual features that you observe with a camera — whatever corners you may extract from your sensor observations of the environment, where you can estimate, for example, where those corners are. These are all possible features, and the question now is: how can we integrate them into this graph-based SLAM framework? We will do that by adding additional nodes to the graph structure that we have seen so far, so we no longer have the situation that every node is a single pose the robot was in in the past, but we will have additional nodes, for example,
for features in the environment. So, looking at what we have done so far: we said we will use a graph, and this graph consists of nodes and edges. The nodes of the graph correspond to the positions of the robot during the mapping phase — this is how we defined it before — so we had only nodes which were the poses of the robot at different points in time, and we had edges, where an edge always connects two poses of the robot; this was done either through odometry or through the concept we called the virtual measurement. What changes now is that we will have different types of nodes, and we will probably also get somewhat different kinds of edges that we need to consider in our framework, but the rest stays very similar: again we have the graph, and we try to find the configuration of the graph that minimizes the error introduced by the constraints. Of course, if we add new edges, or new types of edges, we will also have a different error function to use — and this error function strongly depends on the sensor that we are using. So far in my framework the vertices, or the nodes in the graph, were just robot poses — x, y and the heading θ — and we had these virtual observations, which in the end boil down to rigid-body transformations between nodes, between poses the robot has been in. Today the question is how to deal with landmarks, and we will do that by adding new vertices and new edges to our graph that represent the features; then, through the optimization, we can for example also change the estimated locations of the features in the environment. Here you see one of those illustrations: a vehicle which drives over a surface, and we have those landmarks, let's say, which can be seen — these guys over here. The graph should then not only contain the poses of
the robot at the different points in time, but also the positions of those landmarks, which are again assumed to be static. So far I had the example with the trees as landmarks — this is an illustration from the Victoria Park data set that you have already seen; we saw it in the context of FastSLAM, which addressed the simultaneous localization and mapping problem with a particle filter. We had the robot driving around, and here we have estimates of features, where those features were generated from the trunks of the trees. You see a couple of places where there is no tree, like this one over here, and this can have multiple reasons. One reason may be that something was standing there when the data was recorded: the satellite image was taken at a time that doesn't correspond to the time when the robot — or the car, in this case — was driving through the environment. On the other hand, a couple of them can be wrong observations: for example, the ones over here were quite likely cars which were simply identified as possible trees, just because quite a simple detector was used. So there will be observations, or landmarks, which quite likely do not correspond to reality. What we would like to have in the end is a graph which does not consist only of those blue lines — the trajectory, which plus its interconnections would have been the graph we had before — but also of links to all those landmarks. So the graph will actually grow; there will be quite a large number of new nodes coming into the game. If we want to draw this graph, this is what it now looks like: again we have the triangles, which are the positions of the robot at the different points in time — we keep those — but additionally we have the stars, which are the landmarks, the feature observations that the robot has made through its sensors. These
edges here are my odometry edges, and the robot here says: from this point I see this feature and this feature; over here it says I see this feature; when coming down, it sees this feature. Through seeing the same features from different locations, I can actually relate two poses with each other — through seeing the same feature we get some information about the relative pose of those two robot poses. If I marginalize out the features, in this case this would generate a link between these two positions, because in marginalizing out the feature — they see the same part of the environment, if you want — I generate a direct link; this corresponds to marginalizing out the feature locations from a mathematical point of view, and that is what we have done so far. Today we would like to look into what happens, or what typically happens, if I integrate those landmarks as individual nodes into my graph structure. Is the idea kind of clear? Are there any questions about what we have done so far, about the pose graph itself? So, what problems could you envision appearing in this context — any idea, maybe something not so straightforward? Student: the assignment of the landmarks, so that we know whether it is a landmark that we have seen before or not — we don't know that. Yes, the data association between landmarks is definitely a problem here. On the other hand, we actually had the same problem before, if we are honest, because before we needed to make the decision: is this the same part of the environment that the robot observes, so that through the sensor observation we can relate the poses? That is solving — I don't want to say exactly the same problem, but more or less the same problem. You may do it on the basis of full scans, or you may do it on the basis of individual features, but the underlying algorithms which make the decision that these two places are the same place are very
similar. So this is not necessarily something where you would say it is very specific to this problem. You may need to address it based on your feature representation, and the question of how to get good feature descriptors which describe these features is definitely a point, but the data association itself — it's slightly different, but not structurally different in this kind of problem. So, what are typical sensors you can imagine for observing those features? Student: a laser range finder, for example. So what do you see if you see the trunk of a tree with a laser range finder? Student: the distance and the heading to it. Okay, so we know where the trunk is. If we see the trunk from one position and from a second position, what do we know? We know the position relative to — so if we observe the landmark, I see the landmark now with a range and bearing observation, and at some later point in time, where I don't know where the robot is, I see the landmark again — can I relate them? Okay, let's simplify the problem a little bit: if we have just a camera, we see one feature in our camera image from one position, and we see exactly the same feature in a different image, so we only know the direction in which this feature lies — let's say we are still in a 2D world. What about that — do we then perfectly know where the robot is? Exactly: depending on what we observe, what kind of features we have, and what properties our sensor has, we may not be able to relate those two poses directly with each other, saying, hey, this pose is, whatever, one meter away from this pose and in this certain orientation, because we may only observe the heading — called the bearing — to the landmark. That is what is called bearing-only SLAM: I can only see bearings, the heading towards the landmark. And if I see the landmark from two positions, that may
not be sufficient to reconstruct where the robot was, i.e., the relative transformation between the positions of the robot where those observations have been taken. So we may not get an edge between them which is sufficient in the sense of the degrees of freedom that the system has versus what my measurement provides. That is something which can lead to problems and which needs to be handled in a certain way, or at least differently than before — this is one of the main differences we have in this type of problem. Okay, so the graph that we have now — this kind of graph over here — has two types of nodes. One type are the robot poses, again our blue triangles over here, and the other are the features, our landmark locations, which are those stars over here. The edges that we typically have: one type of edge for the observations — and that is a different observation than the virtual observation we had before. Typically the information that you obtain from one of those feature observations is less than what you would get from a full view of the environment, so you need to accumulate multiple features — see multiple features at the same point in time — and then you can again reconstruct the relative positions of the robot. But that depends a little bit, as I said, on the exact observations that you have: for example, whether you can only identify the relative x-y position of a landmark, or whether the landmark has a distinct orientation so that you can also identify the orientation. For the trunk of a tree there is no distinct heading, which also makes it more difficult to actually reconstruct poses. And of course we have our odometry measurements. What we don't have anymore are the direct pose-to-pose constraints resulting from observations — the virtual observations — because basically we
only operate based on our features. But again, the minimization task is the same one: we want to find a configuration of the nodes — the difference now being that the nodes include the robot's poses as well as the positions of the features or landmarks — which minimizes the error introduced by all the constraints. So the underlying minimization problem again looks very similar, except that we have different types of edges and different types of nodes in our graph, or at least partially different ones. Okay, let's start with the easier example: we have a sensor which tells us the relative x-y location of a landmark. So if I'm standing here, I look somewhere, I can see this feature and say: okay, this is Delta x, whatever, 3 meters, and Delta y, 1 meter — something we can extract. So every one of those observations — this is the vehicle, and these are the feature observations — says something like: okay, this feature is 5 meters to the right and 1 meter to the front, whereas this one is 5 meters to the front and zero to the side. Assume for the moment that we have these kinds of observations: what would an error function look like? The feature positions are now just 2D points in space — we just know the x-y location of those features. So we have robot poses which are three-dimensional — x, y and the heading θ — and landmark positions which have only two dimensions, x and y, and we need to take that into account in our setting. So how would an error function for one observation actually look? How can we model, or compute, the expected observation — because the expected observation then leads us to the error function via the difference to the measurement? The expected observation is of course something we had as well in the EKF, and in exactly the same way for landmark-based SLAM there. Computing the expected observation is something you need in all these frameworks — in least-squares approaches just as in the EKF — where you need
to compute the expected observation given the current state, which here is the current configuration of my graph. I evaluate that by taking the expected observation and comparing it to the real observation, and from the mismatch between the two I come up with an error function — very, very similar to before. So what do I have? I have the pose of the robot and I have the position of the landmark, given my current graph configuration. The only thing I need to do is compute the Delta x and Delta y from the relative position between both of them, taking the orientation into account as well. If I write this down, it is actually quite easy: the expected observation ẑ_ij(x_i, x_j) — where x_i is the robot and x_j is the landmark — can be expressed as ẑ_ij = R_i^T (x_j − t_i), where R_i is the rotation matrix resulting from the orientation of the robot, x_j is the position of the landmark, and t_i is the translation of the robot, i.e., where the robot is in space. What this gives me is a two-dimensional vector of relative x and y: where I expect the landmark to be relative to the robot. That is what the robot should observe — if the configuration of the graph, the robot poses, were exactly right, this is what I expect to measure; it is my expected observation. Again: given the current configuration of the graph, what would I expect to measure — where is the landmark relative to the robot in terms of Delta x, Delta y — taking into account where the robot is in the global reference frame, where the landmark is in the global reference frame, and what the orientation of the robot is. So x_j is the position of the landmark in the global reference frame, t_i is the x-y part of the robot's pose, and R_i encodes the orientation: we first compute the Delta x,
Delta y in the global reference frame and then rotate it into the orientation of the robot — if the robot is rotated, say, 90° to the left, you need to rotate the offset from looking forward to looking sideways, or the other way around. So this is the expected observation: given a graph configuration, we perform this operation and compute what the robot is expected to measure. Okay, how do we bring that into relation with the error function — what is the error function which results from this? Student: it's always the expected value minus the real value. Exactly correct: it is the expected observation minus the value that was actually measured, ẑ_ij minus z_ij. Here x_i is the pose of the robot, x_j is my landmark, and the error is the difference between the two: if I measure Delta x, Delta y, that is a two-dimensional vector, and so is the expected observation, and if I subtract them I know by how much in the x direction and by how much in the y direction the observation does not agree with the current configuration of the graph. Writing this down is straightforward: e_ij(x_i, x_j) = R_i^T (x_j − t_i) − z_ij, where R_i and t_i are computed directly from x_i, x_j is the landmark position, and z_ij is the observation, which is given as well. So all the terms are known, and this is the expression I am trying to minimize — my error function for this case: the expected observation minus the obtained observation, formulated directly like this.
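A minimal sketch of this error function (poses as (x, y, θ), landmarks as (x, y); the function name is my own, not from the course toolbox):

```python
import numpy as np

def error_pose_landmark(xi, xl, z):
    """Error of an x-y landmark observation.

    xi : robot pose (x, y, theta)
    xl : landmark position (x, y)
    z  : measured (dx, dy) of the landmark in the robot frame
    Returns e = R_i^T (x_l - t_i) - z, a 2-vector.
    """
    c, s = np.cos(xi[2]), np.sin(xi[2])
    R = np.array([[c, -s], [s, c]])          # rotation of the robot
    z_hat = R.T @ (xl - xi[:2])              # expected observation
    return z_hat - z                         # expected minus measured

# Robot at the origin looking 90 degrees to the left, landmark 5 m "north"
# of it: in the robot frame the landmark is 5 m straight ahead.
xi = np.array([0.0, 0.0, np.pi / 2])
xl = np.array([0.0, 5.0])
print(error_pose_landmark(xi, xl, np.array([5.0, 0.0])))   # -> (0, 0)
```

The rotation R.T is exactly the "rotate the global offset into the robot's frame" step described above.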
Okay, what happens now if what I observe about my landmark is not a Delta x, Delta y but just a heading? The feature is still given by an x-y position in space, but I can only observe the bearing — kind of the angular part of the state space in this case. How would I do that — what is, again, the expected observation ẑ_ij(x_i, x_j)? So how do I compute the expected observation for this case: given the configuration — where the robot is and where the landmark is according to the graph — what do I expect to measure in terms of orientation? You can see this if you plot it: here are the x and y axes, this is the position of the robot, this is the position of the landmark, and maybe the robot looks in this direction; how do I compute this angle? One part we need is the offset Delta x, Delta y, and the function we can exploit for the computation is atan2. So atan2(Δy, Δx) gives me this angle over here — alpha, that's my alpha — and then, since this is the full heading of the robot and I am interested in this angle here, I need to subtract the robot's heading: atan2(Δy, Δx) minus the orientation of the robot. The Delta x and Delta y are just computed directly from the x and y locations. So, going back to the plot: this is Delta x and this is Delta y, this down here is the orientation θ_i of the robot, measured against the x-axis; alpha is the angle which results from the relative offset Delta x, Delta y; and if I take alpha minus the orientation, I get exactly this angle over here — which is negative in this example, and that is correct, because the landmark lies off to the right side. This is the heading of the expected observation: given the robot is here, looking in this direction, and given the landmark is here, I expect to observe this angle — called beta_ij here in the plot — as the expected observation. Is that kind of clear? Really clear, or only kind of clear? Good. Okay, so here on the slide we have exactly what we
did on the blackboard: this expression here is Delta y — the positional difference in x and y between the landmark and the robot's pose, taking the y component and the x component, so this is Delta y and this is Delta x — and atan2. Use atan2, not plain arctan, to make sure all four quadrants of the reference frame are mapped correctly; if you implement this in your favorite programming language, atan2 is the function to use. And from that we subtract the robot's orientation, the current heading of the robot. Again, going from the expected observation to the error function — what do we need to do? Student: where do the x and y of the landmark come from? From the graph — what this is, is the expected observation given the current configuration of the graph. We want to compute what we expect to measure, and since the landmark is part of the graph, its two-dimensional state vector x, y is given; that is simply what the graph looks like at the moment, and this is what I expect to measure. It is not the real observation — the real observation is just the thing that the sensor provides. So, where did I stop — exactly: now that I have my expected observation, how do I compute the error function? Same way as before — that's a pretty good statement; an epsilon more in detail? Expected observation minus observation — exactly: we take exactly this term over here minus the bearing that we actually observed. If this is zero, the orientation fits perfectly — I am measuring what I expect to measure. If I am off by a large angle, there is a big mismatch, and then I try to minimize the difference in the orientations. That's bearing-only SLAM.
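A sketch of the bearing-only error (again with my own function names), including the angle normalization you need in practice so that differences near ±π don't blow up:

```python
import numpy as np

def normalize_angle(a):
    """Map an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def error_bearing_only(xi, xl, z):
    """Error of a bearing-only landmark observation.

    xi : robot pose (x, y, theta)
    xl : landmark position (x, y)
    z  : measured bearing to the landmark (radians, in the robot frame)
    Returns the scalar e = atan2(dy, dx) - theta_i - z, normalized.
    """
    dx, dy = xl[0] - xi[0], xl[1] - xi[1]
    z_hat = np.arctan2(dy, dx) - xi[2]       # expected bearing (alpha - theta_i)
    return normalize_angle(z_hat - z)        # expected minus measured

# Robot at the origin heading along x, landmark straight ahead -> bearing 0:
print(error_bearing_only(np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0]), 0.0))
```

Note the error is a single scalar here — one dimension per observation, which is exactly the point made next about the rank of the resulting constraint.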
So, depending on the properties of your landmarks and your sensor, you get a certain dimensionality of the error function: if you can only determine the heading, you have a one-dimensional error function; if you obtain, let's say, two dimensions — Delta x, Delta y, or distance and orientation — you get a two-dimensional one; and maybe you can even estimate the heading of the landmark. If the landmark has an orientation of its own — which can be the case if you use special markers or tags that you put somewhere, so that you also obtain how the marker is oriented with respect to the sensor — then you may get a three-dimensional error function. Depending on the dimensionality you obtain and the degrees of freedom involved in the system, you can either directly reconstruct the relative configuration of the two involved nodes, or you cannot — at least not uniquely; you gain some information, you restrict the solution space, but maybe only down to a one- or two-dimensional solution space. Okay, and this brings us to a second, quite important point. If we now proceed as before, then for every constraint we have to build up our linear system — we have the H matrix and the b vector — and if you look at H_ij, the matrix generated from a single constraint between i and j — this was the big matrix with the four blocks — the question is: what is actually the rank of this constraint? Take our 2D landmark constraint, the first example, where we measure Delta x and Delta y: what is the rank of this matrix? For that you need to know how the matrix is constructed: H_ij is given by the Jacobian transposed times the information matrix times the Jacobian, J_ij^T Ω_ij J_ij. The information matrix may have full rank — but what about the Jacobian, what is its dimensionality here? It has two rows, because the landmark observation only has x and y, and with respect to the robot pose it has three columns — so it is a 2×3 matrix. Exactly — and H_ij is built up from this Jacobian
transposed so J transpose J um so the hi J cannot have more than rank two and so this results from the fact that the rank of a transposed a is the same than a transposed same than a so it has the rank of the Jacobian the Jacobian is a 2x3 Matrix so it can have at maximum rank to okay so we only have we have maximum we have a rank two how how that differently from the uh for the bearing bearing only case what is the rank of the Jacobian there or not the rank what is the dimensionality of the Jacobian for the um bearing only case exactly it's 1x3 Matrix so if you have 1x3 Matrix transpose times a 1x3 matrix what's the maximum rank that this Matrix can have one exactly it's one so in case of the the bearing only case we have that AJ has maximum rank one okay these are two important things so here kind of in kind of with one observation kind of one dimension maximum and two Dimensions over there in order to generate a constraint between two poses with XY Theta we need to have three right because the transformation have three parameters okay this gives us to the point consider we have a robot that observes a 2d Landmark with have the Delta X Delta y sensor that's the position of the landmark the robot observes it where can it be so if I if I have a robot which observed this landmarks I know the pose of this Landmark perfectly and so I get a Delta X and Delta Y where can the robot sit in this exactly it sits on a circle around that Landmark so the robot can be somewhere on that Circle depend depending where it sits and depending where it looks you can everywhere on this circle I can generate one p one point on that circle with one specific heading where it generates exactly this Delta X and Delta y measurement what does it tell me about the solution space so what's the dimensionality of this solution space how many parameters do you need to describe this a point in a circle how many parameters do you need to describe where the point lies on a circle one yeah one exactly 
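The rank argument above — H_ij = Jᵀ Ω J can have at most the rank of the Jacobian — can be checked numerically. Below is a minimal sketch in plain Python, not from the lecture itself: the Jacobian entries are made-up illustrative values, and only their shapes (2×3 for a Δx/Δy observation, 1×3 for bearing-only) matter for the rank result.

```python
# Sketch: rank of H = J^T * Omega * J for the two measurement models discussed.
# Jacobian values are made up for illustration; only the 2x3 vs 1x3 shape matters.

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def rank(A, eps=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < eps:
            continue                      # no usable pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r

# (dx, dy) landmark observation: 2-D error, so J is 2x3  ->  rank(H) <= 2
J_xy = [[1.0, 0.0, 0.5],
        [0.0, 1.0, -0.3]]
Omega2 = [[1.0, 0.0], [0.0, 1.0]]                      # 2x2 measurement information
H_xy = matmul(matmul(transpose(J_xy), Omega2), J_xy)   # 3x3 block, but rank 2

# bearing-only observation: 1-D error, so J is 1x3  ->  rank(H) <= 1
J_b = [[0.2, -0.7, -1.0]]
H_b = matmul(matmul(transpose(J_b), [[1.0]]), J_b)     # 3x3 block, but rank 1
```

Both H blocks are 3×3, yet neither reaches rank three — which is exactly why a single such constraint cannot pin down an (x, y, θ) pose on its own.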
so this is one D solution space so for example parameterize with an orientation from some point you know where the guy sits okay so the pose of the robot is threedimensional XY Theta we have a constraint which has rank two that means what remains is a 1D solution space so this results this is the result so if we have only one observation we can't identify a unique solution where the robot sits right so we know where the position of the landmark we we know what the robot observes for that Landmark if we don't have any further information in this case we have still we have a 1D solution so we need more observations robot travels or of a second landmark and then we can identify where the robot actually is so consider the robot also observes the second Landmark the second Landmark which sits over here and actually get two circles over there right and then I can intersect those circles and I can actually typically then check with how the orientation match and I can identify where the robot is but I can't do that with a single observation okay so this is a 1D solution space what about the bearing only case oh damn okay uh so okay okay it's gone um so the robot the thing is if you have only measur the orientation to a landmark um that means that you can be anywhere in the XY along the XY Planet as long as the the orientation of the robot is relative to is a special uh is constraint The Heading is constrained with respect to the robot po Landmark course you see let's say 90° to the right see the camera 90° to the right I can move along this line You'll Always perfectly see the camera at 90° I can actually also move around so I can reach any point in the XY space the only thing which is constrained is the orientation of the platform so if I have a camera just observe a single feature I know the orientation of myself with respect to that land mark it can be still anywhere in the XY space but for every point in space my orientation um is fixed so we have a 2d solution space 
Of course we can pick every point in the x-y plane, but then the orientation is given — so we have a two-dimensional solution space, constrained only in the robot's orientation.

Okay. In all these examples where H_ij does not have full rank, we have the problem that the system can be underdetermined: we may not have collected enough observations to find a unique solution to our problem. Maybe there is no unique solution — we have a solution space, of whatever dimensionality, in which we can be. If we collect enough information — we observe features from different positions, and we have enough observations relating all poses and all landmarks — then it's fine, we can solve the problem. But if we miss that for one single landmark — we see it only once and don't know exactly where it is — then we end up with a solution space, which is a problem, because in the end we need a full-rank matrix H in order to find a unique solution to our overall optimization problem. This can lead to the case that our Gauss-Newton solver simply says: sorry, I can't compute a unique solution — the determinant of your matrix H is zero. This is the same problem we had when we didn't add the constraint for the global reference frame in the beginning: we end up with a system that we simply can't solve with a unique solution. So we need to make sure we have enough observations to actually find a solution.

We actually mentioned that before: how many 2D observations do we need to resolve a single robot pose? We have a single pose of a robot — how many landmarks do we need to see to know where the robot is, given a Δx, Δy observation? — Why three, because of the two circle intersections? Consider that the orientation is typically constrained on the circle as well, so typically two are fine. There may be degenerate configurations — for example, if the two landmarks are at exactly the same position it doesn't work — but typically two are sufficient: each gives a rank-two constraint, and the robot's pose has three dimensions. What about the bearing-only case? Exactly — there we typically need three observations, of three landmarks whose positions we know, in order to determine where the robot actually is.

Okay. So, in practice, if we implement that, we cannot guarantee that we see every landmark sufficiently often — there simply is no such guarantee — and if we don't have that, we run into problems. You can also imagine a robot without any odometry information: then you also miss the links between the poses of the robot, and you need to observe from every pose a substantial number of landmarks that you also see from the next position in order to relate the poses with each other — then the problem gets even worse. And the Gauss-Newton approach will actually fail to compute a solution for you if you run into those problems.

One of the things you can do — the so-called Levenberg-Marquardt solution, which leads to the Levenberg-Marquardt algorithm — is, instead of solving H Δx = −b, to solve a slightly different system: I replace the matrix H by H plus a scalar λ times the identity matrix. What is the effect of this operation — I have a matrix H and I add the identity matrix times a, say, potentially very small scaling factor? — Adding noise? No, you're not adding noise. If you look at the rank of the resulting matrix of the system you solve: we may increase the rank of the system. There are situations where you may screw something up for special λ values, but typically this means you generate a system which has full rank. The smaller the parameter λ, the closer I am to the Gauss-Newton approach — so I can start with a really small value and increase it until I can actually solve the system. This λ is a damping factor, and its goal is to make the system positive definite. And what you are effectively doing when solving this combined system is a weighted sum between Gauss-Newton — the H Δx = −b part — and a steepest-descent approach, because with the identity matrix you have just Δx and b, and in b sits the Jacobian, the first derivative, so that part is a form of steepest descent. So intuitively you first say: okay, I integrate this damping factor to make sure my system is positive definite; but what you are doing in practice is a weighted sum between Gauss-Newton and steepest descent. You can see it as: if you are pretty close to the solution, you are quite likely dominated by the Gauss-Newton part; if you are far away, sometimes steepest descent works better.

What you then need to decide is how to set this λ, and the Levenberg-Marquardt algorithm — in a simplified version — simply adjusts this parameter over time during the search. You start with your initial guess, and while the system has not converged: you start with one initialization of λ; you build your linear system; you compute the error of the current graph configuration; you make a backup of your old poses; then you solve the damped system with the current λ, and you update the poses by adding Δx. Then you check how the error changed. If the error of the new configuration is bigger than that of the old one, I may have screwed something up — maybe my λ was too large, or something else went wrong — so I restore my backup, go back to the previous solution, and increase λ by a constant factor, say two, moving more towards a gradient-descent operation. If the error decreased, I try to make λ a bit smaller: with a smaller λ I'm more towards a configuration that I can solve with Gauss-Newton. If it gets too small and I screw up again, I increase it again — and so on, until the system has converged. This is a simplified version; there are different strategies for setting these parameters, but it is one of the standard techniques used to solve those systems.

One of the standard applications is something you may have heard of already: the problem of bundle adjustment. This is a special instance of what I've talked about here. Bundle adjustment typically refers to the case where your sensor is a camera, you observe only heading information to landmarks, and you want to do a full 3D reconstruction of the scene — so it's not only 2D, it typically refers to 3D. What you want to do is estimate the position of the cameras at every point in time, and estimate the location — the x, y, z location — of the features in space, and you do it exactly in the way that we have done here: you use the Levenberg-Marquardt algorithm, which is one of the standard solvers for these bundle adjustment problems. What is typically minimized is the so-called reprojection error: where do I expect to see my feature — which pixel in my camera image — and where do I actually observe it? I take the Euclidean distance between the pixel where I see my feature and the pixel where I expect to see it, and I want to minimize this error. This is called the reprojection error: if you reproject where you think the landmark should be in your image, how well does that match the real observation? This special instance of the problem was actually developed in photogrammetry in the 1950s — quite a while ago, when nobody was thinking about robotics or about the SLAM problem in robotics; it was definitely earlier. It is also very frequently used in computer vision for reconstruction. So these are very similar problems. In robotics we can sometimes exploit additional information — we may have odometry, which computer vision people often do not consider; they just take an image or a collection of images. Also, in robotics we typically use the same camera throughout, because it is one camera mounted on the robot; if you do it in the computer vision context — say from consumer cameras, with a lot of people uploading images to Flickr, and you want to do a 3D reconstruction from those — you need to estimate the camera parameters for every camera separately. So there are certain aspects of the problem that differ between the individual fields, and they make different assumptions, but the overall problem that is addressed is very similar between what happened in photogrammetry, what happens in computer vision, and here in robotics. There is actually a very close link between those fields.

Okay, so what I was talking about today is graph-based SLAM with landmarks. I presented a slightly simplified version where everything is in 2D and we only have one orientation component, which makes things a little easier; if you go to three-dimensional space — x, y, z with three angles, yaw, pitch, roll — you may run into things that are a little more tricky to implement, but in the end it boils down to more or less what I presented here, building on top of what we have learned since the beginning of the year about least-squares SLAM. The rank of this H matrix matters: the closer you are to a positive definite matrix H, the easier you can solve your system, and the Levenberg-Marquardt approach is one way to solve that problem when you don't have full rank of H, by adding the damping factor times the identity matrix, which gives you a weighted sum between steepest descent and the Gauss-Newton approach. There are also very specialized methods for solving this landmark-based SLAM problem, and one work I would like to mention is an overview — already a couple of years old now — of the different bundle adjustment methods, which have been used in photogrammetry years ago and which are used in computer vision to do this full 3D reconstruction based on image data. Okay, that's it from my side for the first hour, and we are going to
SLAM_Course_2013 | SLAM_Course_08_Sparse_Extended_Information_Filter_Part_1_201314_Cyrill_Stachniss.txt
Okay, so welcome everyone. Today we'll talk about the last variant within the Kalman filter family: the sparse extended information filter. Last week we discussed the extended information filter, a technique which basically does extended Kalman filtering but not in moment form — that is, not using the mean and covariance matrix — but using the information vector and the information matrix for the filtering. You can show that both filters have the same expressiveness: they estimate the same distribution, a Gaussian distribution — let's better put it that way — and the approximation errors we make in the EKF we make in exactly the same way in the extended information filter. The difference between the extended information filter and the extended Kalman filter is that the individual steps have different computational costs. Inverting a matrix obviously costs the same, but conditioning and marginalization are two operations where one is easier in information form and the other is easier in moment form; depending on what we are doing, one can be more efficient than the other. In the end, if we put everything together, it turns out that for the Kalman filter the prediction step is computationally cheaper and the measurement update computationally more expensive, and for the extended information filter it is the other way around. So the question is: is there any big gain from using the one form or the other? It turns out they are both quite similar — there may be applications where the extended information filter is a little advantageous, and others where the extended Kalman filter is — but there is no really clear winner.

What we're going to do today is discuss an extension of the extended information filter: the sparse extended information filter. We are talking about what happens when the matrices involved in the computations are sparse, and this turns out to be highly advantageous if we operate in information form. We will discuss it today and actually also next week, because I didn't manage to squeeze everything into 90 minutes — I tried, but then gave up and split it into two parts. Today we will discuss the prediction step and the measurement update step of the sparse extended information filter, already tailored to SLAM so that we have some concrete examples; next week, the sparsification and mean update steps, two additional steps which are needed. Today I would like to give you the insight into what makes the sparse extended information filter a good idea, and we will discuss the two main steps. It involves a bit more math today — there are quite some derivations. I will try not to go over them too quickly but to give you real insight into what happens; everything is on the slides, and I hope I present it in a way that lets you understand every single operation. If anything is unclear, just interrupt and say "I don't understand this step from, whatever, line 18 to line 20 — what's going on here?" and I will try to explain it as well as I can. The goal today is: even if you don't keep all the individual steps of the sparse extended information filter in mind — which I think you won't — you should at least know the key steps, and be able to say "if I needed to, I could derive it myself." Of course there are some tricks involved, and without someone telling you the trick you won't find it on your own; but given, say, a cheat sheet of the tricks and rules to apply, you should be able to derive the steps yourself.

As a short reminder: in the extended information filter we used the canonical representation, the information form, for all the operations. Let me note down a few things on the blackboard that you should keep in mind today, so you always have them at hand. The covariance matrix is the inverse of the information matrix, Σ = Ω⁻¹, and the mean can be computed as the inverse of the information matrix times the information vector, μ = Ω⁻¹ ξ — that's how the moments look. And we can do the same the other way around: the information matrix is the inverse of the covariance matrix, Ω = Σ⁻¹, and the information vector is the inverse of the covariance matrix times the mean, ξ = Σ⁻¹ μ. These are just the two things I note down here, so whenever you forget them you can look them up on the blackboard; we may need them during our derivations. So we all know there are two ways to represent a Gaussian distribution: the standard or moment form, and the information form or canonical parameterization. They are equivalent in what they can represent — both describe a Gaussian distribution.

Okay, now let's start with the sparse extended information filter, and let me motivate why it makes sense to use this variant. The sparse extended information filter is an approximation of the extended information filter — it is not exact, so the result we get will be worse than with the extended information filter or the EKF — but, as we can show, it has some very nice properties from the computational point of view: namely, it turns out that the involved operations can be done in constant time, which is a dramatic gain in computational resources.

If you look at a standard SLAM problem — we have a map of landmarks and a robot moving around in the environment, as you can see here on the left — you see the trajectory the robot took through the place, and the landmarks it observed; these blackish, blurry areas should represent that each landmark estimate is a Gaussian distribution, with roughly spherical covariances for the individual landmarks, so you have a pretty good idea where they are located. If you look at the corresponding covariance matrix, you get this dense matrix with a checkerboard pattern: all the x locations of the landmarks are strongly correlated with each other, likewise all the y locations, but there is little correlation between the x and y of the individual landmarks. So if we fix the x position of one landmark, we basically know the x positions of all other landmarks quite accurately — that's what this pattern reveals. The problem is: such a matrix has a quadratic number of significant elements, so inverting it is a costly operation. If I look at the (normalized) information matrix instead — the inverse of this covariance matrix — it does not look dense. The values shown in white are not exactly zero, they are just small; strictly the matrix is also densely filled, but some elements are much, much stronger than others. If I zoom in, I see patterns like the robot's pose, which sits over here, having a strong link with some element down here — there is a strong link between them — while all the other elements are nonzero but very small: close to zero, but nonzero.

Now I can ask myself — just to motivate the idea — if I have this matrix, may I simply set everything smaller than some ε to zero? Then I have a sparse matrix and can do my operations more efficiently. The sparse extended information filter is somewhat more involved than just zeroing small values — it's not that easy — but it will allow us to build an information matrix which is sparse and which remains sparse throughout all the computations, and that is a great advantage for all the operations.

The first thing I would like to do is get a little more interpretation into this information matrix: what sits in it? We can visualize it nicely if we say: the first three dimensions represent the pose of the robot, and the other elements represent the positions of the landmarks. So we have the robot over here, and these elements here are the landmarks. There is something called an active landmark — these are the black ones here — and so-called passive landmarks. The active landmarks are typically those the robot is currently observing, or where there is currently a direct link between the robot's pose and the landmark's pose. And we can interpret this matrix as a graph. Let me start from the beginning: we have nodes, and these nodes represent the random variables, the individual dimensions of my problem — or we can group them for a better interpretation, so that the pose of the robot is one node and every landmark is one node. Then the information matrix tells me which nodes are connected by an edge: whenever there is a nonzero element in this matrix, there is an edge connecting those two nodes, and this edge imposes a constraint, a link, between them. So if the robot's pose is here and it makes an observation, a link is created to the landmark it observes, and a constraint results from this observation. And the darker the off-diagonal element, the stronger the link — the more information sits in this link, the more I know.

Yes, please? — Yes, exactly: you could see this as a fully connected graph. But, say, landmarks that are very far away, that have never been seen at the same time, and where the robot took a long time to travel from one to the other, will have a very, very weak link. You're absolutely right: if I take the exact information matrix and build the graph like that, more or less every node would be connected with every other node, so there would be no win in this sense. But what we're discussing here is how to interpret this information matrix. The idea we are pursuing is that we only want to maintain this information matrix exactly for a small number of features — those I see right now, where there is a strong link between the robot's pose and those features; these will end up being the active features. Then there are others which are further away, which I haven't seen for a while, where the correlation between the current pose of the robot and those landmarks is rather small, and I'm going to ignore them. That's the approximation we make: we try to get rid of those edges — that's something we will do later on — and this gives us a sparse graph. But if you interpret the matrix exactly, fully populated with elements, however small they are, it results in a fully connected graph.

For those of you who attended the AI course and know something about graphical models: you can actually show an equivalence between this information matrix and a Gaussian Markov random field. Every node — whether it represents a landmark or the robot's pose — is a random variable, and the constraints between them correspond to the potentials, as in a Markov random field in general; here a Gaussian Markov random field, because we live in a Gaussian world. I'm not going to elaborate on that in much more detail, but those of you who know about Markov random fields can at least see the connection between them.

If two nodes are not connected — there is no edge between them in the graph — that means they are conditionally independent of each other given all the other nodes. This is something you may know from other graphical models you have seen, such as Bayes networks with their directed links; you can model similar things here. Whenever there is no edge between two nodes, they are conditionally independent given all the rest: given I know all the other nodes, these two are conditionally independent; if I don't know all the others, they are not independent of each other. Just draw a very, very simple graphical model: three nodes in a chain, 1 – 2 – 3. What this means is: if you know node 2, then nodes 1 and 3 are conditionally independent of each other; but if you don't have any knowledge about node 2, then knowledge about node 1 will also tell you something about node 3. Given you know node 2, they are independent of each other — this is conditional independence, and this is
exactly how we can interpret two nodes with a zero element in the information matrix: if we have node i and node j and the element (i, j) is zero, that means they are conditionally independent of each other given we know all the rest — that's the direct interpretation. And the magnitude of the individual values in the information matrix tells me how much information sits in a constraint: the larger the value, the stronger the constraint. What you typically observe is that landmarks which have been observed together, and which lie close together as the robot moves around, have stronger links. This doesn't hold in general and not in all situations — it depends on your type of sensor, on how you model your problem, and on how much uncertainty you add when you move. With high motion uncertainty these correlations are typically smaller. The correlations come up because, moving from time step t — where you model the current pose of the robot and the positions of all landmarks — the robot moves on, you add a node x_{t+1} for the current pose, and mathematically you marginalize out the old pose. This creates all these connections, and therefore the robot's poses and the landmarks start to depend on each other — they are not conditionally independent anymore. So, what we have seen so far in our SLAM problem is that most off-diagonal elements in this information matrix are close to zero, but they are not zero — they are small. And as I said before, what we are trying to do is to focus only on the important ones and to ignore the others
— or get rid of the others — and in this way maintain a sparse matrix, with only a constant number of nonzero off-diagonal elements. As I just said: set most links to zero, and avoid fill-in. It is not really setting them to zero — for the moment you can think about it as setting them to zero, but we will do it somewhat smarter than that. If we have a sparse information matrix, this can dramatically simplify a lot of the operations we do, because there is only a small, constant number of elements in the matrix for which I need to do an operation, and with a constant number of nonzero off-diagonal elements I can do all my computations dramatically more efficiently. That is the key trick of the sparse extended information filter: get rid of a large fraction of the direct links, focus on the most important ones, and avoid fill-in — don't let the matrix get dense. Add only a small number of links and make sure the matrix doesn't fill up over time: these are the key ingredients of the sparse extended information filter. This is somewhat specific to the SLAM problem, at least in the way it is presented here, because we have the assumption that in the motion update only a small number of dimensions evolve — the robot moves, but that does not affect the landmarks; the rest of the nodes are static. There are some assumptions in there, and this dream efficiency of constant-time operations is only achieved in this context; otherwise the operations get a little more costly. But depending on the problem you have, you may come up with sparse variants of other problems as well. Okay — that's the very rough general overview. Is it clear at the moment, or should I elaborate on something a bit more?
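The conditional-independence reading of the information matrix from the 1–2–3 chain example can be made concrete with a tiny numerical sketch (the numbers below are made up, not from the lecture): a zero off-diagonal entry in Ω means the two variables are conditionally independent given the rest, yet the covariance Σ = Ω⁻¹ is dense — the variables are still correlated marginally. This is also why the information matrix can be nearly sparse while the covariance matrix is not.

```python
# Chain x1 -- x2 -- x3: the information matrix has edges (1,2) and (2,3) only,
# so Omega[0][2] == 0, i.e. x1 and x3 are conditionally independent given x2.
# Inverting it shows the covariance is nevertheless dense: marginally,
# x1 and x3 ARE correlated.

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [e * i - f * h, c * h - b * i, b * f - c * e],
        [f * g - d * i, a * i - c * g, c * d - a * f],
        [d * h - e * g, b * g - a * h, a * e - b * d],
    ]
    return [[x / det for x in row] for row in adj]

Omega = [[ 2.0, -1.0,  0.0],     # made-up chain-structured information matrix
         [-1.0,  2.0, -1.0],
         [ 0.0, -1.0,  2.0]]

Sigma = inv3(Omega)   # dense: Sigma[0][2] != 0 although Omega[0][2] == 0
```

For this Ω, Σ[0][2] comes out as 0.25: the marginal correlation between x1 and x3 is nonzero even though their direct link in Ω is exactly zero.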
We'll go through all the details, but is there something which is unclear at the moment? Okay, perfect. Let's look a little bit into the individual operations. The first thing I would like to start with is the measurement update step. What you see on the left-hand side is the information matrix of our problem; what you see on the right-hand side is the current scene: the robot and three landmarks m1, m2 and m3, before any observation has been taken into account. There is some initial uncertainty about the robot's pose, shown by this gray area — say from the initialization we started with — and this is what the matrix looks like before any observation has been made. Is that clear? Now let's say the robot observes landmark m1. If the robot observes m1, we will know something about m1, and we say there is a direct link between the robot's pose and m1: given I know everything else — given I fix x_t — I can tell you something about m1. So, ignoring m2 and m3 for the moment: if I know the pose of the robot and I have seen the landmark, I know something more about the landmark. There is a direct link between them, and this leads to this fill-in of the matrix. Without moving, the robot now takes a second observation; it observes landmark m2. How would the matrix fill up — which blocks would be gray if I see landmark m2? This 3-by-3 block? No — exactly, those two elements stay white. We see landmark m2, which tells me: if I have everything else fixed, knowledge about the robot's pose tells me something about landmark m2. Now let's check the other possibility that you indicated. I just want to know if
m1 and m2 are conditionally dependent on each other given I know everything else. Given I know everything else means I know where the robot is — not perfect knowledge, but I know its estimate. In that case we would get rid of this block over here and keep only this one: if I know where the robot is, knowing something about m1 does not help me estimate m2, because I know the robot's pose. So under the assumption that I know the robot's pose, m1 and m2 are conditionally independent of each other. That is different if I do not know the robot's pose: then, if I know where m1 is, I can make a better estimate of m2. But under the assumption that I know the robot's pose they are independent of each other, so there is no direct link between them. Is it clear why that is the case? Okay, perfect. So, to summarize: if we integrate an observation, we add elements to this information matrix between the robot's pose and the landmark we observed — we add something to the off-diagonal elements — and of course we also gain knowledge about the landmark itself, so its own element increases as well. Okay, so we have integrated our observation; now it is time for a motion update. What does it mean that the robot moves? The robot moves, say, from pose x_t at time t further on to a new pose x_{t+1}. The pose is updated and the previous pose is not taken into account anymore — mathematically, it is marginalized out. You can see that as creating a new node x_{t+1} and then getting rid of the old node x_t, which means that all neighbors of that node become connected.
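The measurement update just summarized can be sketched in information form: integrating a linearized observation adds Hᵀ Q⁻¹ H to the information matrix, and because the Jacobian H has nonzero columns only for the robot pose and the observed landmark, only those blocks fill in. All numbers here are made up, and a 1-D pose with 1-D landmarks keeps the sketch small:

```python
import numpy as np

# State: robot pose (1-D for simplicity) + 3 landmarks -> 4 dimensions.
n = 4
Omega = np.eye(n)            # hypothetical prior information

# Linearized observation of landmark m1 (index 1): the Jacobian H is
# nonzero only in the robot-pose column and the m1 column.
H = np.zeros((1, n))
H[0, 0] = 1.0
H[0, 1] = -1.0
Qinv = np.array([[10.0]])    # assumed measurement precision

# Measurement update in information form: Omega <- Omega + H^T Q^-1 H.
Omega_new = Omega + H.T @ Qinv @ H

# Only the pose<->m1 off-diagonal entries became nonzero; the pose<->m2
# and pose<->m3 entries are untouched.
print(Omega_new[0, 1] != 0, Omega_new[0, 2] == 0, Omega_new[0, 3] == 0)
```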
When I move to x_{t+1} and get rid of x_t, its neighbors get connected. The reason is that by moving from x_t to x_{t+1} I add additional uncertainty to the variable which represents the current robot pose, but this uncertainty is not added to the landmarks observed before, and therefore those landmarks become correlated with each other once I eliminate the old pose x_t. So I get new elements in the matrix. This results from the fact that I have a noisy motion and I only model the current pose of the robot, ignoring all previous poses — and therefore I get this so-called fill-in of the matrix: elements that were zero get values, and this results from the marginalization step. You marginalize out a variable, and this leads to a connection of all neighbors, all elements which were connected with this node. You can see that some of the information from these correlations has moved: the links between the current pose and the landmarks get weaker, because the robot moved on and got more uncertain about its new pose — these values get a bit smaller — but the links between the two landmarks actually get stronger. A link is added where there was none before in this example, because the robot moved and added uncertainty to its state, so knowing something about landmark m1 will now tell me something about landmark m2. That is the reason, and it happens in this form. So, if you look at the motion update: it weakens the links between the robot's current pose and the landmarks, and it adds links between the landmarks. Therefore this matrix fills up with new elements, and that is a problem for the sparse extended information filter, because we will lose sparsity as we continue to map.
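The fill-in caused by marginalizing out the old pose can be reproduced with a Schur complement on a toy information matrix (the values are arbitrary): x is linked to m1 and m2, while m1 and m2 are not linked directly; after eliminating x, they are.

```python
import numpy as np

# Information matrix over (old pose x, landmarks m1, m2):
# x is linked to both landmarks; m1 and m2 have NO direct link.
Omega = np.array([[3.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0],
                  [1.0, 0.0, 2.0]])

# Marginalizing out x (index 0) is a Schur complement on the
# remaining block.
a = Omega[0, 0]                    # pose block (scalar here)
b = Omega[1:, 0]                   # pose-landmark links
Omega_marg = Omega[1:, 1:] - np.outer(b, b) / a

# The m1-m2 entry is now nonzero: marginalization connected all
# neighbours of the eliminated node (fill-in).
print(Omega_marg[0, 1])
```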
This matrix will fill up, fill up, fill up. These values may be small, but they are nonzero, and if you just execute the steps as you have seen them here, that is exactly the extended information filter: your information matrix will fill up with those values. Therefore the sparse extended information filter does a so-called sparsification step. This sparsification step is an approximation — that is the important thing to know. It is not an exact operation; you lose something if you do it. You lose accuracy — your estimate, the Gaussian distribution, will be worse after this step — but you gain computational efficiency, and that is the reason why people do it. Okay, so this is before sparsification: this link over here corresponds to the value stored in this element. We can say, okay, m1 is a landmark we observed some time ago, the robot has continued traveling, so let's now ignore this link — simply throw it away. If I throw it away, we have zero elements at this off-diagonal position, so there is no link between the robot's pose and m1 anymore. And you can see the effect of the sparsification: some of the information is propagated elsewhere — this link and this link get stronger. Again, the assumption you make is: given I know m2, the new pose of the robot and the old landmark are conditionally independent. This is an assumption, not an exact operation; I lose information, I throw something away, but this approximation helps me solve my problem in a much more efficient manner. That is the overall goal here. Okay, so what the sparsification does: it means ignoring links.
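As the lecture says, for the moment you can think of sparsification as simply zeroing an off-diagonal link. The real SEIF sparsification is smarter — it redistributes some of that information via conditioning so the result stays a consistent Gaussian — so treat this as the crude placeholder version only:

```python
import numpy as np

def naive_sparsify(Omega, i, j):
    """Drop the direct link between variables i and j by zeroing the
    off-diagonal entries. Crude stand-in: the actual SEIF sparsification
    is an approximation done via conditioning, not plain zeroing."""
    Om = Omega.copy()
    Om[i, j] = 0.0
    Om[j, i] = 0.0
    return Om

# Arbitrary test matrix; link (0, 1) is the one we "throw away".
Omega = np.array([[3.0, 0.8, 0.5],
                  [0.8, 2.0, 0.3],
                  [0.5, 0.3, 2.0]])
Om_sparse = naive_sparsify(Omega, 0, 1)
print(Om_sparse[0, 1], np.count_nonzero(Om_sparse) < np.count_nonzero(Omega))
```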
We lose something by assuming conditional independence. The sparsification transforms the posterior in a certain way — some of the information moves somewhere else — and the resulting belief has conditional independence between the variables whose direct link I removed. This is how the matrix will look after the sparsification step: the number of nonzero elements stays the same — the next motion and measurement update steps generate a matrix with the same number of white and dark elements. The links being whited out here are the direct links between the robot's pose and features. So those are the three main steps of this sparse extended information filter: the motion step, the measurement step, and the sparsification step, which is the additional step that has to be carried out in order to obtain and maintain a sparse matrix. And this leads us to the concept of how to actually realize this sparsity. The sparse extended information filter does it using so-called active and passive landmarks — this is a central element of the filter. The active landmarks are a subset of all the landmarks, and they include the currently observed ones: the landmarks I currently observe are in my active set. There may be some more, say those observed in the previous time step, but the assumption is that this set of active landmarks is small and constant — it does not grow in size. Say I have a million landmarks in my environment and just ten are active: that means I only maintain direct links between the robot's pose and these ten landmarks, assuming I never see more than
ten landmarks at the same time. It is an approximation: in order to implement this efficiently you need the set to have a constant size. You may be able to adapt it over time, but the gain from that is extremely small, and you can typically make the assumption that you will not see more than, say, N landmarks in the same observation. Again, you lose something — it is not an exact operation — but it is another assumption you can make: you will not observe more than ten landmarks, or whatever fits your current setup. The passive landmarks are simply all the other landmarks, and the key trick — the approximation I introduce — is that I only want direct links between the robot's pose and the active landmarks, and no links between the current robot pose and the passive landmarks. If I go back to my example: this is an active landmark, this is a passive landmark, and this is a landmark which was active in the previous step and becomes passive now — of course, I eliminated its link in the sparsification step. So there are three kinds: active landmarks, passive landmarks, and those which have been active and now turn passive. Yes, please? So what you typically do is say: I can maintain ten landmarks, so you maintain the last ten, and when the eleventh landmark is observed you throw out the first one. In this example I have only drawn three landmarks; you could have more, and then the active set would be bigger. Yes, you can see it as a window in the sense that the active landmarks are those I currently consider, and I ignore the direct links to all the others. Okay, maybe there is a misunderstanding: the active set is not only the set of landmarks I currently see. Everything I currently see should be in the active set, but there can be more in there. What
implementations typically do is keep a constant number of active landmarks — N equals ten is my personal number; other people may use other numbers. If you observe three, those three are active, and for the seven you have not seen recently you still keep the direct links, because that is the number of direct links you want to maintain. Obviously, the bigger this set, the closer you are to the extended information filter: if you make all landmarks active, you have the standard EIF solution, which should give the same result as the EKF solution. If you make your active set smaller, you make a bigger approximation. So it is actually a parameter you can tune depending on how much computational resources you have available: say you have a really fast computer, you can set N to 30; you have a really slow machine, N should be more like 5. The smaller the number of active landmarks, the faster the filter, but the stronger the approximation — and the bigger the negative impact on your result. Any further questions at this point? So, we were talking about active and passive landmarks, and the key idea of the sparsification is that the sparse extended information filter performs it at every step: after every motion update and measurement update, it carries out the sparsification step. The result is that I only have direct links between the robot's pose and the active landmarks, and no links between the robot's pose and all the other landmarks. This strategy of keeping only a small subset of active landmarks also results in the landmarks having limited connectivity among each other.
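One possible bookkeeping for a constant-size active set is sketched below. The eviction policy — the oldest active landmark turns passive when a new one arrives — is my assumption for illustration; the lecture only requires the set to stay bounded in size:

```python
from collections import OrderedDict

class ActiveSet:
    """Keep at most `capacity` active landmarks; the oldest active
    landmark turns passive when a new one is observed (one possible
    policy -- any bounded-size policy would do for SEIF)."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.active = OrderedDict()       # landmark id -> None, in age order

    def observe(self, lm_id):
        """Mark lm_id active; return the id that turned passive, if any.
        An id that turns passive is exactly the one whose direct link to
        the robot pose must be sparsified away."""
        if lm_id in self.active:
            self.active.move_to_end(lm_id)
            return None
        self.active[lm_id] = None
        if len(self.active) > self.capacity:
            old, _ = self.active.popitem(last=False)
            return old
        return None

s = ActiveSet(capacity=3)
deactivated = [s.observe(i) for i in [1, 2, 3, 4]]
print(deactivated)
```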
The reason for that is: when do I create links between landmarks? I create links between landmarks whenever the robot moves, and only between landmarks which are active at the same time. So if I limit my active set, I also limit the number of links I create among the landmarks: if there are only ten landmarks in the active set, then in this step only those can become connected with each other, and there are no connections to landmarks further away. So I have direct links only between landmarks that have been in the active set at the same time. Okay, so to summarize, we have three main steps of the sparse extended information filter: motion update, measurement update, sparsification step. If I say that to you, I am lying, because there is actually one more. So far these are the three steps you know; unfortunately there is a fourth step. Any idea why we need a fourth step? It is very similar to the extended information filter. What was one of the small disadvantages, one of the structurally not-so-nice things, about the extended information filter? Was it sufficient to maintain the information matrix and the information vector, or was there something else I had to maintain? There was a third quantity I needed in the extended information filter besides the information matrix and the information vector. No, not the Kalman gain. Think about the nonlinear functions involved — what are the parameters of those nonlinear functions? In the extended Kalman filter these were the function g and the function h: g was the motion update, h was the observation function. What was the input, what parameters do you need to put into those functions to get the result? The current state — and for g also a control. And what is the current state? The current best estimate of the state. Where is that in my extended
information filter? Yes, exactly: the mean is missing. Of course I can reconstruct the mean, as written here, by inverting the information matrix and multiplying it with the information vector, but the problem is that this operation is potentially very costly. Therefore we said: we maintain the estimate of the mean additionally, in order to put it into our nonlinear functions. And that is the same thing we need to do here. So we have to put in a new second step, which is an update of the state estimate — of the mean. We need to maintain the mean as well, and we need to do it in an efficient way; that is the challenge. We need the mean to compute the expected measurement, and also for the motion update — otherwise we cannot predict where the system will end up. Of course we could compute it in theory via the matrix inversion, but that is an inefficient way of obtaining it and would ruin our constant-time approximation, so we need this additional step. Okay, so these are the four steps of the sparse extended information filter. Today I would like to talk about the first and the third step: the motion update, which is tricky — the first one is a little bit ugly — and the measurement update, which is pretty easy. That will take roughly an hour, probably a bit less. The sparsification step and the update of the state estimate are something we are going to do next week; that will not take the full ninety minutes. I tried to squeeze everything into one ninety-minute session, but it was simply impossible to do that without eliminating essential steps that you need to know in order to understand the sparse extended information filter. So I decided to keep this as the only lecture in this course which is split into two parts — I was unable to do it better while still having the feeling that you actually learn what is going on.
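To avoid the costly full inversion when recovering the mean μ = Ω⁻¹ ξ, one can run a few iterative sweeps warm-started from the previous mean. The SEIF literature amortizes such coordinate updates over time; this fixed-iteration Jacobi sweep is just an illustrative sketch:

```python
import numpy as np

def recover_mean(Omega, xi, mu0, iters=50):
    """Approximate mu = Omega^{-1} xi without a full matrix inversion:
    Jacobi-style sweeps, warm-started from the previous mean estimate.
    (Converges for diagonally dominant Omega; SEIF spreads such updates
    over time instead of running them all at once.)"""
    mu = mu0.copy()
    D = np.diag(Omega)
    for _ in range(iters):
        # mu_i <- (xi_i - sum_{j != i} Omega_ij mu_j) / Omega_ii
        mu = (xi - Omega @ mu + D * mu) / D
    return mu

# Tiny diagonally dominant test system (arbitrary values).
Omega = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
xi = np.array([1.0, 2.0])
mu = recover_mean(Omega, xi, np.zeros(2), iters=200)
print(np.allclose(mu, np.linalg.solve(Omega, xi)))
```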
Okay, so let's look into the four steps of our sparse extended information filter. For SEIF SLAM, again with our four steps, the input is the information vector from the previous point in time, the information matrix from the previous point in time, the mean estimate from the previous point in time, the current control, and the current observation. The output is the new information vector, the new information matrix, and the new mean. So that is our goal: we maintain these three elements — not only the information vector and the information matrix, we additionally keep the mean, because we need it for our nonlinear functions. Let's start with step number one, the motion update. Again, this is the most involved operation, so do not switch off during these steps. We will derive a number of quantities and then put those quantities together, so a couple of new variables will be introduced in the next 30 minutes or so. It is not that all these variables have a deep meaning; it is more that these are elements we can compute, and then we can put them all together and get an easy equation. So some of the definitions which come up very soon may look a little bit odd, but hopefully later on you will see that you compute five quantities, put them together, and get your result — that is the easiest way to actually write it down. Any questions? Yes: u_t is the control command, the odometry or motion command that was given to the robot — like "go a meter forward", that is the motion of the robot encoded in u_t. And z_t is the observation that was obtained — like "that landmark, the camera on a tripod at the moment, is three meters away from me, 90 degrees to the left"; this is the information in z_t. So we have the motion command which
was executed by the robot and the observation the robot obtained, both at the current time step. It is the information I use to go from my old belief to my new belief, from the belief at time t−1 to the belief at time t: I need to know, approximately, which motion the robot carried out, and what it observed, under noise. Okay, before we start with the motion update, there is one thing you may not know by heart, but you should have seen it if you have taken a linear algebra course: the so-called matrix inversion lemma. It tells you that if you have a matrix of the form R + P Q P^T, where R and Q are square matrices, and you want to invert this term, you can split it up into the longer equation (R + P Q P^T)^{-1} = R^{-1} − R^{-1} P (Q^{-1} + P^T R^{-1} P)^{-1} P^T R^{-1}. This is the matrix inversion lemma, and it can be extremely helpful if I already know something about the matrix I want to invert — for example, if I already know R^{-1}. If I look at all the individual terms written down here, with R^{-1} already known, the only matrix I still need to invert is Q. Depending on the dimensionality of Q, that can be really advantageous: if Q is a low-dimensional matrix, and P maps that low-dimensional quantity to the high-dimensional space — similar to the F_x matrices we used in the Kalman filter — and we know R^{-1} already, this can be computed very efficiently. That is actually one of the reasons we will use it in the sparse extended information filter: we do not know the full inverse in the example we will see later on, but we can compute it efficiently this way, and then the only costly operation is inverting Q. If Q is low-dimensional, this is cheap.
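The lemma can be checked numerically — here with an n = 6 matrix and a k = 2 "small" block, so on the right-hand side only a 2×2 matrix (plus the already-known R⁻¹) has to be inverted. All matrices are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                       # big and small dimensions
R = np.eye(n) * 2.0               # assume R^{-1} is already known / cheap
P = rng.standard_normal((n, k))   # maps the k-dim block into n dims
Q = np.eye(k) * 3.0               # low-dimensional matrix

Rinv = np.linalg.inv(R)

# Left: direct n x n inversion.  Right: matrix inversion lemma,
# where the only new inverse is k x k.
lhs = np.linalg.inv(R + P @ Q @ P.T)
rhs = Rinv - Rinv @ P @ np.linalg.inv(np.linalg.inv(Q) + P.T @ Rinv @ P) @ P.T @ Rinv

print(np.allclose(lhs, rhs))
```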
This is a very effective way of inverting such a matrix. Again, I am not going to derive it — that is something you find in most linear algebra courses. It is just a lemma we are going to exploit in this course. Yes, please? Yes, that is true: we said Q is a low-dimensional matrix, and P^T maps the high-dimensional quantity to a low-dimensional space, so you only invert in the low-dimensional space. This only makes sense if Q has low dimensionality; if Q has the same dimensionality as your matrix R, you typically do not win anything with this operation, unless there are certain special structures you can exploit. The main gain comes when Q is low-dimensional and you already know something about R^{-1} — either you have computed it before, or you know the matrix is sparse and can compute the inverse fast, or you know the inverse from another computation. Then this is extremely helpful. Okay, now let's look into the prediction step: how does the SEIF SLAM prediction step look? As I said, our goal is to compute the predicted information vector, the predicted information matrix, and the predicted mean out of my previous belief at time t−1 — sorry, that should read t−1 in this line — and the motion command u_t, so that we obtain the predicted belief. As we know from before, this update step is very costly in the extended information filter, so we are going to do it better by exploiting sparsity. My information matrix — this one here — is sparse because it is the output of the previous step of the sparse extended information filter; my assumption is that this matrix is sparse, and I want to show that if it is sparse, I can actually compute these quantities in a very efficient manner,
given the assumptions about my active and passive landmarks. So: I use active and passive landmarks, and I know the information matrix from the previous point in time is sparse; then my goal is to compute all those elements very efficiently — in particular, in constant time. Okay, let's start from EKF SLAM; this is just a copy-paste of the first part of that algorithm, as a reminder. We had this matrix F_x, which has ones only in the dimensions corresponding to the current pose of the robot and then a large number of zeros corresponding to the number of landmarks times their dimensionality. It is just used for mapping my small 3-by-3 quantities into the high-dimensional space I am operating in. With that, I can express the prediction step in the EKF case by taking the old mean plus this mapping matrix applied to my nonlinear function — this is the element which tells me how to update the current pose of the robot. Then I can also compute the Jacobian, which was the identity matrix plus, again, this mapping of the small matrix into the high-dimensional space added to the identity: so I have the identity, and only two off-diagonal elements — these two entries — are nonzero. This was the result of the first derivative: the Jacobian with the partial derivatives of my nonlinear function. Okay, so this is all EKF, and the first step in the EKF was: my Jacobian times my previous estimate of the covariance matrix times the Jacobian transposed, plus the noise term — where the noise term is again a 3-by-3 matrix mapped to the high-dimensional space, so it is zero for all landmarks and nonzero only for the robot's pose. That is what I have, and I am going to start from this operation to derive the sparse extended information filter prediction step.
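The projection matrix F_x and the mean prediction μ̄ = μ + F_xᵀ δ just recalled can be sketched directly; the pose increment δ here is a made-up value:

```python
import numpy as np

def make_Fx(n_landmarks):
    """Projection matrix that maps the 3-D robot pose into the full
    (3 + 2N)-dimensional SLAM state: identity on the pose block,
    zeros over all landmark dimensions."""
    return np.hstack([np.eye(3), np.zeros((3, 2 * n_landmarks))])

Fx = make_Fx(4)                            # 4 landmarks -> 11-dim state
delta_small = np.array([0.1, 0.0, 0.05])   # assumed 3-D pose increment
mu = np.zeros(11)

# Mean prediction: only the pose entries of the full state change.
mu_pred = mu + Fx.T @ delta_small
print(mu_pred[:3], np.count_nonzero(mu_pred[3:]))
```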
To derive the sparse extended information filter step, the first thing I do is a copy-paste of these three elements. Then I start with this covariance term and say: the first thing I want to compute is my information matrix. The information matrix is the inverse of the covariance matrix, so in order to compute Ω̄_t I take this expression, invert it, and see where this takes me. From the notation point of view: as I derive my SEIF algorithm I again have my F_x — exactly the same matrix as before, the copy-paste of this element — and I have my vector δ, which is exactly this vector over here, so that the new mean is the old mean plus F_x^T times δ; and the matrix Δ collects exactly these terms. Now I want to use this to compute the predicted information matrix at time t by inverting this expression: my information matrix is the inverse of my covariance matrix, just by definition, and that is how I write it down. This is just copy-paste. Then I say: let's define a new quantity Φ, where Φ^{-1} corresponds to these three elements — these three elements give me Φ if I invert them. So if I can compute the overall expression, including this inverse, efficiently, I can obtain my information matrix efficiently: if I define Φ^{-1} to be these terms, then Φ is these terms inverted, and my information matrix is expressed through Φ. That is just the definition of Φ. Okay, so let's see if we can compute this term efficiently: we have the inverse of Φ^{-1} plus R_t — the noise term — so,
exactly what was written on the previous slide. I can say: this matrix R_t, the noise term of the motion update, is a low-dimensional matrix, 3-by-3, mapped to the high-dimensional space, so I can write it exactly as F_x^T R F_x. Which pattern do you see here? Something we discussed before that we could now exploit: we want to compute the inverse of a matrix expressed as one matrix plus F_x^T R F_x, where R is low-dimensional. Exactly — this is exactly the pattern we have seen for the matrix inversion lemma. So the next step is just to apply the matrix inversion lemma, and you get exactly this term. It starts to get a little bit ugly, but we do not really care about the individual terms. The important thing is that this part in here is a 3-by-3 matrix: F_x Φ F_x^T just takes out a small 3-by-3 part of the matrix Φ, and what gets inverted in the end is a 3-by-3 matrix. That does not hurt — it is constant in the number of features or landmarks I have, so that is perfect for me. The next thing: these two outer mappings are all zeros except for a 3-by-3 block, so this part of the expression takes a dense 3-by-3 matrix and maps it to a high-dimensional matrix which is zero everywhere except for one 3-by-3 block. Is that clear? This expression over here is a large matrix which is zero everywhere except in the upper 3-by-3 block, because the inner 3-by-3 matrix — which may be densely populated, I do not know — is mapped to the high-dimensional space, and the rest is all zeros. That is good, and the only thing left is to multiply this expression with my matrix Φ — and this is cheap if and only if this matrix Φ is sparse.
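Putting the lemma application together: the predicted information matrix comes out as Φ minus a correction term in which only 3×3 matrices are ever inverted, because F_x Φ F_xᵀ is just the pose block of Φ. The sketch below checks this against the direct (expensive) inverse on a toy 7-dimensional state; Φ and the noise R are arbitrary test values:

```python
import numpy as np

def predicted_information(Phi, R3):
    """Motion-step information matrix via the matrix inversion lemma:
    Omega_bar = Phi - Phi Fx^T (R3^{-1} + Fx Phi Fx^T)^{-1} Fx Phi.
    Fx Phi Fx^T is just the 3x3 pose block of Phi, so the only
    inversions are 3x3 regardless of the state size."""
    inner = np.linalg.inv(np.linalg.inv(R3) + Phi[:3, :3])   # 3x3 only
    kappa = Phi[:, :3] @ inner @ Phi[:3, :]
    return Phi - kappa

Phi = np.eye(7) * 2.0          # hypothetical sparse Phi (2 landmarks)
R3 = np.eye(3) * 0.5           # assumed 3x3 motion-noise covariance
Omega_bar = predicted_information(Phi, R3)

# Sanity check against the direct, full-size inversion:
# Omega_bar = (Phi^{-1} + Fx^T R3 Fx)^{-1}.
Fx = np.hstack([np.eye(3), np.zeros((3, 4))])
direct = np.linalg.inv(np.linalg.inv(Phi) + Fx.T @ R3 @ Fx)
print(np.allclose(Omega_bar, direct))
```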
If the matrix Φ is sparse — meaning it has its diagonal values and is zero everywhere except for a constant number of nonzero elements — then I can do all this computation in constant time, because this expression, which is just a 3-by-3 block with zeros everywhere else, is only multiplied with a constant number of elements of Φ. So basically everything is zero except a constant number of elements, and I multiply a small matrix with a constant number of entries: only a constant number of operations, independent of how large the matrix Φ actually is — provided there is only a constant number of nonzero elements and I have means of quickly retrieving them, which is something I assume. Okay, so now I take this whole expression and define it as a new matrix κ_t. κ_t I can compute efficiently if and only if the matrix Φ is sparse. So I need Φ and κ, and if I have both of them I can actually compute my information matrix. The next question is: how do we compute Φ efficiently, and why is it sparse? If you look at how we defined it: this quantity over here was the inverse of Φ, and now I need Φ itself, not inverted — so the question is how to obtain Φ. Our next goal is to compute this element as a constant-time operation, because we still want constant time overall, under the assumption that the information matrix coming in from the previous operation is sparse. That is the assumption we make, and then we want to show that we can actually compute this in constant time. Okay — sorry, I
mixed up the order of the slides. So, what is the inverse of my Jacobian? The Jacobian, just by its definition, was the identity matrix plus this small block — with just a few nonzero elements — mapped to the high-dimensional space and added up. Okay, and now the problem is that I have to invert this matrix. How do I get that inverse? Looking at this matrix, it can be written as: Δ plus a 3-by-3 identity in the upper-left block, a 2N-by-2N identity in the lower-right block, and zeros in the two off-diagonal blocks. Is it clear why it can be written that way? N is the overall number of landmarks — the overall number I maintain in my filter. So, I want to compute the inverse of this, and to do that I can exploit the kind of matrix I have here: a block matrix consisting of four sub-matrices where the off-diagonal blocks contain only zeros. If that is the case, then to compute the inverse of the overall matrix I can compute the inverses of the individual nonzero blocks — the inverse of this block and the inverse of that block, so the power of minus one moves inside. The inverse of the identity is the identity, so there is nothing to do for the lower block, and for the upper block it is the inverse of Δ plus the identity. This is a really great thing — why is this a really great thing? Exactly: this was a high-dimensional, (3+2N)-by-(3+2N) matrix I had to invert, and I simplified the problem by exploiting the special structure of this Jacobian so that I only need to invert a 3-by-3 matrix. That is a big win, because it is constant in the number of landmarks: I do not care how many landmarks I have, because this does not change — the only thing I need to handle is this 3-by-3 block at the top.
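The block-structure argument can be verified directly: inverting the prediction Jacobian G = blockdiag(I₃ + Δ, I₂ₙ) only requires inverting the 3×3 pose block. The Δ values here are made up:

```python
import numpy as np

def invert_G(Delta, n_landmarks):
    """Inverse of the SLAM prediction Jacobian
    G = blockdiag(I3 + Delta, I_2N).  Only the 3x3 pose block is
    actually inverted, so the cost is independent of the number
    of landmarks."""
    n = 3 + 2 * n_landmarks
    Ginv = np.eye(n)
    Ginv[:3, :3] = np.linalg.inv(np.eye(3) + Delta)
    return Ginv

# Hypothetical first-derivative block of the motion model.
Delta = np.array([[0.0, 0.0, -0.2],
                  [0.0, 0.0,  0.1],
                  [0.0, 0.0,  0.0]])

# Full Jacobian for 3 landmarks (9-dimensional state).
G = np.eye(9)
G[:3, :3] += Delta
print(np.allclose(invert_G(Delta, 3) @ G, np.eye(9)))
```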
What I can then do is change this matrix a little: I pull out the identity, so it is a (3+2n)-by-(3+2n) identity matrix plus the rest, and since I pulled out the identity I have to subtract it from this term, so it is minus the 3x3 identity. It is just pulling out an identity matrix, and if I do that I can define a new matrix, which is this matrix, so that the Jacobian is just a large identity matrix plus this additional term over here, which I can again write using my mapping from the low-dimensional space to the high-dimensional space plus this expression over here. Is everyone still on board? Okay. So our overall goal is to compute the information matrix efficiently. We said we can define the information matrix as the inverse of the covariance matrix; it consists of two terms, where the first term is something we defined as Phi inverted and the second one was our noise term, and we used the matrix inversion lemma to arrange these things in a nice way. Now we have matrices we can handle quickly if my matrix Phi is sparse, and we are now looking into how I can actually obtain Phi and why it is sparse. The first step in computing Phi is something we did already: it is computing this term over here, this guy, or rather this guy but just transposed, and we know that this guy is already sparse. So we can put that together, and we obtain this expression over here, and this was just the matrix that we defined before. This Psi is exactly this term over here, where this is a 3x3 matrix, so everything is super efficient. We just need to multiply: we have a matrix which is just the identity plus a little bit, times a sparse matrix, times the identity plus a little bit. The identity always just copies everything over, and then we have this little bit multiplied with a constant number of elements, and this little bit is itself constant: a 3x3 block times a constant number of elements, a number I don't know exactly, but it is constant. So this is independent of the size of the overall matrix, and that is the key thing: I can compute this term as a product of three sparse matrices. This guy is sparse, this guy is sparse, and this guy is actually better than sparse, because those Jacobian-like matrices are basically identity matrices; for the identity part I don't need to do anything, and only the small number of elements which are non-identity introduce some cost, which is constant and doesn't depend on how big the matrix is. Okay, so just to repeat what I've said: this G is an identity matrix except for 3x3 blocks, the information matrix is sparse, and this implies that this term can actually be computed in constant time; in short, sparse, identity with a little bit, identity with a little bit. And if I do a kind of in-place update, I just update this estimate without needing to copy it over in memory; the copy operation would be linear in the size of the matrix, but if I do it in place in that matrix, it can be done efficiently. Is it clear what I mean by an in-place operation? If you have a large memory block allocated, which may be huge, and you copy it over, the copy operation would be extremely costly. So what you do is perform the operation in place in memory: you just override the individual values in your matrix that you need to change, and although the matrix is large, you only need to change a small number of values to get the result. That is what in-place means: if you had to copy this matrix, that would be expensive, but you can change it in place instead.
Okay, so here is again a more sophisticated explanation of why this is a constant-time operation. This was our matrix before; we said this is identity plus a 3x3 block, identity plus a 3x3 block, and this is sparse. So I can multiply that out and obtain this expression over here: I have the old information matrix over here, then again the identity plus 3x3 block times the sparse matrix, plus the sparse matrix multiplied with my identity plus 3x3 block, all easy going, and here again the identity plus 3x3 block, the information matrix, and the identity plus 3x3 block. All of these operations can be done efficiently, and the result can be seen as an additional lambda which has a constant number of nonzero elements and just needs to be added to the information matrix. What you typically do is use a means for representing sparse matrices where you basically only store the elements which are nonzero, with efficient ways for indexing them. One way to do this is using a hash table: the key to your hash table is the row and column index, so from this (i, j) you compute a hash value, look it up in your hash table and retrieve the element. The hash operation takes constant time, so given the index you can obtain the element in constant time, and you can even iterate over those values, depending on the underlying hash table implementation that you use. So that is one way to implement a sparse matrix. Okay, just to sum that up: you compute this guy over here, which was the identity matrix plus a little bit; then you can use this to compute my lambda; based on the lambda I can compute my Phi; then I can use my Phi to compute my Kappa, and my Kappa to compute my information matrix. Again, I introduced a few expressions, a few variables, over here; if you don't introduce variables the expressions get very, very huge and I can't put that stuff on the slides. These are individual values which can be computed in individual operations and put together, and I obtain my information matrix. So I can turn that exactly into an algorithm: these three steps you have seen in the beginning, and then I compute all those individual values and put them all together. With that we have done up to line nine of our algorithm: we computed the information matrix for the prediction step. But the information matrix is only part of the game; we also need the mean and the information vector. It was the hardest part, though, so from now on it is not that tricky anymore. The next thing is that we need to compute the mean, right? But the mean is easy; it is what we know from the EKF. To get the predicted mean we have the old mean, which we luckily maintain, and this matrix which maps from the low-dimensional space to the high-dimensional space, and it just maps my vector delta. This vector delta contains the elements of the nonlinear function: the nonlinear function was the old value plus some delta, and this is exactly the delta as we defined it before; it is just what results from the odometry motion model, simply copy/paste from the algorithm. So this was the matrix, and the nonlinear function g, so x' equals g(x), was written in the form x plus some delta, and if g has this structure, this is exactly the delta you have seen there. That was easy: we compute the old mean plus our delta, which we can easily compute. So we have now computed the information matrix, and we have computed the mean; the last missing part is the information vector. Okay, what is the definition of the information vector? Written over there: the information vector is the inverse of the covariance matrix times the mean.
I am not that good at drawing, but I can also write this as the information matrix times the mean; that's all things we know. So it is the predicted information matrix times the new mean. The problem is that if I computed these guys in this way, it would be a linear-time operation, right? Because I have the mean vector, which is (3+2n)-dimensional, and I multiply it with an information matrix which is sparse, but still this has at least linear complexity. That would be stupid: I did all these magic tricks to get constant time, and then the information vector screws it up. So that is not going to happen, but it means we are not done in this step with just a simple operation; we have to change our expressions a little bit. What is written here is basically exactly what we had on the blackboard, and now we can say, okay, let's expand this over here: the old mean is the old information matrix inverted times the old information vector, which is this equation over here, plus nothing changed over there. Okay, now I can get rid of the brackets; then I have the new information matrix times the inverse of the old information matrix times the old information vector, and here the predicted information matrix times my mapping function and my delta. What I can do now is look at the first term over here, the predicted information matrix, and just add and subtract an identity matrix to it; so that is plus one and minus one, and nothing bad happens there. Hmm, wait, actually these are not ones, these are zeros; what I wrote there is not quite right: it is not a multiplication, it is a sum, so these entries are zeros.
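The expansion carried out at this point can be summarized as follows (my notation, following the standard SEIF derivation: xi is the information vector, Omega the information matrix, and the old mean satisfies mu_{t-1} = Omega_{t-1}^{-1} xi_{t-1}):

```latex
\bar{\xi}_t
= \bar{\Omega}_t \, \bar{\mu}_t
= \bar{\Omega}_t \left( \Omega_{t-1}^{-1} \, \xi_{t-1} + F_x^\top \delta_t \right)
= \bar{\Omega}_t \, \Omega_{t-1}^{-1} \, \xi_{t-1} + \bar{\Omega}_t \, F_x^\top \delta_t
```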
So these are not ones but zeros: by adding the matrix and subtracting the same matrix I add zero twice over; zero, not one. Okay. And now I can just rearrange those, reshuffle them a little bit, so I write that as this minus this guy, this guy minus this guy, multiplied with all the rest, plus this guy multiplied with the remaining part. So I group these two guys together, these two together, this one together, and then I partially multiply out this element from the matrix, so I obtain this and this, strictly the same, times these guys over here, plus this element times these guys over here, and the rest stays as it is. And now it is nice that I defined some of those values before, this Kappa and all the ugly terms, because I can reuse them here; that is the reason why I did that. So this guy is simply minus Kappa: perfect, already computed. This is equal to lambda: perfect, already computed. This guy here is my old mean: perfect. This is an identity matrix: that is easy. This is the old information vector: great. And the remaining terms I can write by just rearranging the elements a little bit, moving them to the front, so the new information vector is my old information vector plus some extra terms. Now remember that we had nice properties for those matrices. For example, my lambda matrix is basically zero everywhere; there are only a few elements which are relevant. Then I subtract something else which is also basically zero except for a few elements, and I multiply that with a vector which is fully filled; but since this matrix is basically zero everywhere except for a few entries, it essentially cherry-picks a few dimensions from my previous mean to perform the update. So the resulting vector here is basically zero everywhere except for a few elements, and if I can do that fast, with only a constant number of elements I have to look up in the previous mean, it is something I can do as a constant-time operation. And then I have just this matrix again, which is basically zero everywhere except for a 3x3 block, multiplied with a sparse matrix; everything zero except for small blocks multiplied with a sparse matrix, which can perfectly be done in constant time. So that's it; that was exactly my equation, and nothing else needs to be done here. So I can turn that into the algorithm: this is my information vector, computed exactly as I have written it before, and this is my mean, exactly as it was done, and we are done; it is perfect. We have the predicted information vector, the predicted information matrix and the predicted mean, so we are done with the first step of the algorithm, which was the prediction step. That was the hardest part; the rest is really not that tricky. What we have seen here is that under the assumption that my incoming information matrix from the previous point in time is sparse, meaning it has only a constant number of nonzero off-diagonal elements, and exploiting that the motion update only affects a small, constant subset of the dimensions of the state vector, because the motion of the robot only affects the pose of the robot and not the landmarks, under these two assumptions I can actually compute those three quantities in constant time, without any further approximation beyond what we have done before, just by exploiting that this matrix is sparse. If the incoming matrix were not sparse, everything would break. That is the reason why, if you run the regular Kalman filter or the regular information filter and you do not exploit any sparsity and do not exploit that only a small subset of the dimensions is updated, it is simply a costly operation. Okay.
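Collecting the quantities discussed above, the prediction step can be summarized as follows; this is the standard SEIF motion update as given in Thrun, Burgard and Fox, "Probabilistic Robotics", which to my understanding is the derivation followed here (Omega, xi, mu are the information matrix, information vector and mean, F_x projects onto the pose dimensions, Delta is the 3x3 Jacobian block, delta the odometry-based pose increment, and R the motion noise):

```latex
\begin{aligned}
\Psi_t &= F_x^\top \left[ (I_3 + \Delta_t)^{-1} - I_3 \right] F_x \\
\lambda_t &= \Psi_t^\top \Omega_{t-1} + \Omega_{t-1} \Psi_t + \Psi_t^\top \Omega_{t-1} \Psi_t \\
\Phi_t &= \Omega_{t-1} + \lambda_t \\
\kappa_t &= \Phi_t F_x^\top \left( R_t^{-1} + F_x \Phi_t F_x^\top \right)^{-1} F_x \Phi_t \\
\bar{\Omega}_t &= \Phi_t - \kappa_t \\
\bar{\xi}_t &= \xi_{t-1} + (\lambda_t - \kappa_t)\,\mu_{t-1} + \bar{\Omega}_t F_x^\top \delta_t \\
\bar{\mu}_t &= \mu_{t-1} + F_x^\top \delta_t
\end{aligned}
```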
So the next thing I would like to discuss is the measurement update step, because these are the two key elements that we know from the extended information filter; therefore I decided to present the first and the third step first, ignoring the stuff in the middle. The second step is something we are going to do next week. Okay, let's look at the measurement update. This is identical to EKF SLAM, so the first part is just copy/paste from EKF SLAM. What do I do? I have a matrix Q which tells me the uncertainty of my observation; I assume here again that I have range and bearing observations, as we did for the Kalman filter, for EKF SLAM. I have my observation, which consists of this range and this bearing observation, and I assume known data association: the assumption is that whenever I see a landmark through my sensor, I know that it is landmark j in my state representation, so I know what I am observing. Then there is this initialization: if I have never seen the landmark, I need to initialize it. Otherwise I compute the predicted observation, and I collect these delta terms that we used for the Jacobian. Again I need the mean estimate here; that is why it is important to have the mean: I need the mean for the prediction step, and I need to compute the mean for the correction step in order to compute the predicted observation. And then I have again this matrix H, which is the Jacobian of the observation function, exactly the same as we did in the EKF. The only difference compared to the EKF are these two steps over here; this is because we are now computing the information vector and the information matrix instead of the mean and the new covariance matrix, but these are exactly the equations we took from the EIF; this was the update step in the EIF.
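As an illustration of the predicted observation just mentioned, here is a minimal sketch of a range-bearing measurement model; the function name and the flat (x, y, theta) pose layout are my own choices for this sketch, not taken from the lecture slides:

```python
from math import atan2, sqrt, pi

def predicted_observation(pose, landmark):
    """Predicted range and bearing from a robot pose (x, y, theta)
    to a landmark position (mx, my), as in the EKF-style
    measurement model h(x)."""
    x, y, theta = pose
    mx, my = landmark
    dx, dy = mx - x, my - y           # delta vector to the landmark
    q = dx * dx + dy * dy             # squared distance
    r = sqrt(q)                       # predicted range
    bearing = atan2(dy, dx) - theta   # predicted bearing
    # normalize the bearing to (-pi, pi]
    while bearing <= -pi:
        bearing += 2.0 * pi
    while bearing > pi:
        bearing -= 2.0 * pi
    return r, bearing
```

The Jacobian H of this function with respect to the state is then what enters the two information-form update lines.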
Everything else, how to initialize landmarks and how to compute the Jacobian, is exactly as it was in the EKF. The only thing we do not need to do is compute the Kalman gain and do the weighting; instead, these terms over here are done exactly as we derived them for the EIF. Yes, this is for one landmark; therefore we have this loop over here, one iteration for each landmark that I observe. Yes, you can do two things: you can either update them separately or you can stack them in a matrix. There is a subtle difference between those two options: somewhere an inversion of this matrix Q is involved, and the more elements you stack, the more costly this operation is, unless Q has only diagonal elements. So if you observe two landmarks at the same point in time then, given that you know the pose of the robot, those observations are independent of each other, and you can process them one at a time with the for loop. Yes, you can do that, because the updates refer to the same point in time, so the algorithm really works as it is written there. It means that between two landmarks observed at the same time you do not have any links in the off-diagonal elements. If you had such links, because you have a sensor which provides you with that information, then this might change the game; but as long as this is not the case, if you put in a second observation here with a diagonal matrix where you have the same values repeated, there is nothing wrong with that. Okay, and with that the measurement update is done. That was easy: it was just copy/paste from the Kalman filter that we did before, and then we replace these two lines with what we derived last week for the EIF; it is just taking two parts we already have and sticking them together. Done. So that means we are done with the motion update and the measurement update, and both are constant-time operations: we only needed to change a constant number of elements in my information vector, a constant number of elements in my information matrix, and a constant number of elements in my mean estimate. And that is the cool thing: it is a constant-time operation under the assumption that the incoming information matrix is sparse. That's it for today. What we are going to do next week: now that we know how to do all this, how can we actually ensure that this matrix is sparse and stays sparse over time? So next time we do the sparsification step; that is what we are going to start next week, and then we will also discuss the update step. What the update step does is answer how I obtain the corrected mean, given the predicted mean, in an efficient way, because I need that to compute the predicted measurement: for the predicted measurement I need to know where the robot is according to the belief, and therefore I actually need that information. Okay, that's it from my side for today; you are released a little bit early, five minutes earlier than is usually the case. What you should do at home is revisit the concepts here, make sure you understand what is going on, make sure you understood the key concepts, and then we start next week with the sparsification and the update step. It doesn't hurt to look through the slides of next week, and especially through the sparsification, which is also maybe non-trivial, to prepare for next week. Then we will probably take about one hour next week to finalize the sparsification and the update step for the sparse extended information filter, and then we will do a short wrap-up about what we have done so far: we will quickly revisit the Kalman filter, the extended Kalman filter, the UKF, the EIF and the SEIF, very briefly, in ten minutes, to wrap up everything we have learned so far about Kalman filtering and
the Kalman filter family for addressing this problem, and then in two weeks move over to the next paradigm and look into what's next, what lies beyond the Kalman filters. That's it from my side; thank you very much, and see you next week.
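As a supplement to the hash-table idea mentioned in the lecture: a minimal sketch of a sparse matrix stored as a dictionary keyed by the (row, column) index; the class and method names are illustrative, not from the course material:

```python
class SparseMatrix:
    """Hash-table (dict) based sparse matrix: only nonzero entries
    are stored, keyed by their (row, col) index pair."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self._data = {}  # (i, j) -> value; absent keys are implicit zeros

    def get(self, i, j):
        # constant-time lookup; missing entries are zero
        return self._data.get((i, j), 0.0)

    def set(self, i, j, value):
        # constant-time in-place update; storing zero removes the entry
        if value == 0.0:
            self._data.pop((i, j), None)
        else:
            self._data[(i, j)] = value

    def nonzeros(self):
        # iterate only over the stored (nonzero) entries
        return self._data.items()
```

Reading, writing or clearing an entry touches only that entry, so updating a constant number of elements costs the same no matter how large the matrix is, which is exactly the property the constant-time prediction step relies on.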
SLAM_Course_2013
SLAMCourse_02_Homogeneous_Coordinates_201314_Cyrill_Stachniss.txt
Okay, then welcome to the course. Today, as announced yesterday, there will only be a quite short lecture on homogeneous coordinates, which are one alternative to the Euclidean coordinate frame that is frequently used in robotics to express transformations. The students of last year said it would be nice to get an introduction to that, and therefore we have a short, probably 20-minute, introduction to homogeneous coordinates: what they are used for, what we can do with them, and why they are a useful tool for us. You will see homogeneous coordinates, especially transformations expressed in homogeneous coordinates, a few times throughout the lecture, so it is good to know what you are going to see. After that we will have a one-hour exercise: a short introduction to MATLAB or Octave, which we will use in the course, and then directly exercises with these homogeneous coordinates. So, about half an hour of lecture and approximately one hour of tutorial, and from next week on we continue as planned: every Monday is a lecture, and Tuesdays from 10 to 12 is the tutorial. Okay, let's start. Why do I claim that it makes sense to leave the Cartesian world? The reason is that there are a lot of bearing-only sensors, that is, sensors which do not measure the distance to obstacles but just the orientation at which an object is, such as, for example, a camera. A camera does not give you any distance information: by counting the pixels, or by knowing which pixel corresponds to which angle, you can estimate at which angular orientation a perceived object is located, but from a single image, at least if you do not integrate any background knowledge or additional information, you cannot tell how far a point is away. That means that the camera generates a projection of the 3D world onto a 2D image plane; it is a projection of any 3D point, whatever it is, onto
a 2D image. And Euclidean geometry is actually suboptimal for describing that. We can do it, it works, there is nothing which hinders us from doing it, but it is suboptimal in the sense that the math can get a bit complicated; the equations for doing that are suboptimal, especially for describing the central projection. That is probably the camera model you learned at school: you have one center of projection, and every point that is projected onto the image plane is projected along a ray through that center. As I said, if you express all that with Euclidean math it can get a bit complicated, and therefore people introduced another kind of geometry, called projective geometry, which explicitly handles, or is well suited to handle, these projections; as a result the math becomes easier, and therefore it makes sense to look into that representation. It should be noted that projective geometry does not change the relations between objects in space; we model the same relations of objects in that space. The only thing is that, as I said, the math becomes easier: transformations especially, which are what we are interested in, can be handled in a very nice and simple form, and that is the reason why we use it. We look especially into homogeneous coordinates, which are a coordinate system used in projective geometry, and they have two nice advantages. The first one is that we can represent what we call points at infinity in a very nice way. Points at infinity are points which are infinitely far away, where we only know the direction. If you think of a camera: you observe an object and you know which pixel it corresponds to, so you know the angular orientation of that object relative to you, but you have no idea how far away it is, and it can be that this point is infinitely far away. If you think about your usual representation of space with x and y coordinates, it is actually not easy to describe a point which is infinitely far away; in particular, you cannot do it with finite coordinates. You could have an additional factor which goes to infinity, take into account the angular component, and generate a point at infinity that way, but that is not very nice. In contrast, homogeneous coordinates explicitly allow us to represent points at infinity. The second thing, and that is the one which is most important for us in this course, is that we can use a single matrix to represent transformations. Transformations which include rotations, translations, shear and scale changes can all be put into a single matrix multiplication, and that makes things very easy for us. If you think about 2D geometry and you have a vector in 2D that you want to transform: for a rotation you could take a rotation matrix, multiply it with the vector, and you get the rotated vector, but you could not do that for translations. In homogeneous coordinates, the projective transformations, which include the affine transformations, which in turn include translations and rotations, can be nicely put into a single matrix, and that means that by simple matrix multiplications we can express coordinate transformations, which is really nice and is actually the main reason why we do this. So although points at infinity are very important, they are not something that will be densely covered in this course; but the last point is something we will use frequently in this course, and therefore it makes sense to know what homogeneous coordinates are. Okay, let's get a bit more concrete: what is a homogeneous
coordinate? The representation x of a geometric object, just a point for now, is homogeneous if x and a scalar factor lambda multiplied with x represent the same object, given that lambda is not zero. So we have some vector x, and x as well as any scalar multiple of x should be exactly the same object; that is just the definition. One example: we have a point x with three coordinates u, v, w. If u can be expressed as w times a new variable x, and v can be expressed as w times a new variable y, then this point (u, v, w) and the point (x, y, 1) represent exactly the same object, because you obtain the latter by multiplying the former by 1/w; so (u, v, w) is equal to (x, y, 1) times w, and this is exactly the scalar factor. Okay, so what can we do with that? This is our homogeneous vector, exactly the same equation we had before, and it should correspond to the point (x, y) in Euclidean space. So if I want to go from the Euclidean space to the homogeneous space, it is sufficient to add another dimension to my vector and put a one in this new dimension. For representing objects in the 2D world, for which I used a two-dimensional vector so far, I now need a three-dimensional vector, and to map from the Euclidean space to these homogeneous coordinates I just add a new dimension, and this dimension takes the value one; that is all I do. And conversely, if I have a point (u, v, w) in homogeneous coordinates and I want to transform it to the Cartesian space, I take the last component and divide all elements by it; I normalize the vector by multiplying the whole vector by 1/w, so this value becomes one, and then I can neglect this component, and this gives me my point in the Euclidean space. Okay, so we can visualize this now: if this plane over here is our R2, our regular Cartesian two-dimensional space, and this three-dimensional construct represents my homogeneous coordinate system, then this R2 is just the plane where the last coordinate, the z coordinate, equals one in that three-dimensional space. And our point x in the Euclidean space, say our point (x, y) down here, is exactly this single point in the plane. In my three-dimensional space, it represents any object that lies on the line which goes through the origin of the coordinate frame, through that point x, and further on, and this is exactly this lambda, this scaling factor. So if my point x lies in my homogeneous space over here, it is exactly the same unchanged point down here in R2, in my two-dimensional Cartesian space, and you may see that wherever this point lies on that line, it will fall on exactly this coordinate in the 2D plane. And this is where you may be able to see the connection to, for example, a camera, where a point lying somewhere on a ray gets projected onto one specific point in my image plane. So to map from any point on that line, we just divide all coordinates by the third coordinate; that means the third coordinate becomes one, so the point moves down onto that plane, and then we have our coordinate in the Cartesian space. So by normalizing we go to the Cartesian space, and if we want to go from the Cartesian space to the homogeneous space, we just have to add a new dimension and put a one in there. Is there any question about that? Okay. As a result, the center of the coordinate frame is (0, 0) with a one as the extra component, and whatever value I put in there, say a two, would represent the same object, because when I normalize, the first coordinates stay zero since they are zero already; so (0, 0, 1) or (0, 0, 2) are both the center of the coordinate frame. And this now allows us to express points at infinity, points which are infinitely far away. This case is quite easy: we just take our coordinates u and v and add a zero as the last component, because if we wanted to map this to the Euclidean space, we would need to divide u and v by zero.
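The two mappings can be sketched in a few lines of code (a minimal illustration; the function names are mine, not from the course):

```python
def to_homogeneous(p):
    """Euclidean point [x, y] -> homogeneous point [x, y, 1]."""
    return [p[0], p[1], 1.0]

def to_euclidean(p):
    """Homogeneous point [u, v, w] -> Euclidean point [u/w, v/w].
    A last component w == 0 encodes a point at infinity, which has
    no Euclidean counterpart."""
    u, v, w = p
    if w == 0:
        raise ValueError("point at infinity has no Euclidean representation")
    return [u / w, v / w]
```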
and if you let's say take a value which we let approach there mean this point moves away on that infinitively far away on that line and so by explicitly having a zero in here we can represent points that are infinitively far away and um the the nice thing is that we can do that with three coordinates here which are finite so we don't need infinite large values down here as we would need to do that in the um in the cartisian world we can now do certain things like checking if we can express lines in a very similar way and can then easily check if lines are parallel or orthogonal um without caring what the individual Dimensions actually look like and that's kind of a very nice property that's what it's often used for if you work as cameras and want to take into account points which are infinitively far away there's nothing we do here in that course at least to not to a great extent and therefore um I don't want to go more into the details here um I just want to look more on um the Transformations that we can you express with those homogeneous coordinates because that's something we are going to use here in that course okay so the first thing we can do um we can also do that for 3D points we just have a than a four-dimensional vector which is here the additional value T then we normalize by T so we have u v w t and exactly the mapping to thean space and again if we M map back we have our values here we just add one down here and we are in our homogeneous world so the maping between Both Worlds actually quite easy okay so how can we describe Transformations using this um coordinate system um I said you before that we can express everything Thing by a matrix transformation uh the only thing is this Matrix should be invertible so we have if we have our transformation matrix M and we multiply an ector x with m in homogeneous coordinates we get another Vector in homogeneous coordinates X Prime over here and what I want to illustrate now that we can actually by having a 
special structure in this matrix M, we can express a lot of interesting transformations that we cannot express with a single matrix operation in the regular Cartesian space — there we would need a more complex expression, and here we can nicely put it into one single matrix. So how does this matrix look? Assume we are living in the 3D Cartesian world, so we have a 4D homogeneous vector and therefore a 4x4 matrix M over here. The first thing we want to express are translations, just pure translations in this space. That was something we couldn't do in the Euclidean space by a matrix transformation — we would have to add a vector to a vector; in the Cartesian world we cannot express this by a matrix multiplication. This is how the matrix looks: our matrix M may carry any scale factor in front — this doesn't matter, because we are living in homogeneous coordinates, so the scale factor can be any nonzero value — and then M has this form. This I is an identity matrix, specifically a 3x3 identity matrix; this value here is simply a three-dimensional vector of zeros, transposed, so three zero values; this is the regular one; and this vector t is a translation vector with three components: the translation in x, the translation in y, and the translation in z. If we build up our matrix in this way, we can multiply our transformation matrix M with any vector, and we will get a new vector which has exactly the same values but is translated: the x component by tx, the y component by ty, and the z component by tz. Okay, and the nice thing is we can do this with a matrix operation; we don't have to add a vector as we would need to in the Cartesian world. So there's nothing bad
about adding a vector, just to say that, but the nice thing is that here we can express everything with one matrix operation. If you combine this with different forms of transformations, like rotation matrices, then it's inconvenient if you always have a multiplication and then have to add something; if you stack or chain them all together, these expressions can get a bit ugly. Here everything is done in matrix form: I can just multiply my matrices and in this way execute, for example, a large chain of transformations. Okay, so translation is nice, but typically we also need to rotate things in space. What happens if we want to express a rotation as well? If we want to express a rotation, our matrix M looks like this: again we have our zeros down here and our one here, and this now is again a zero vector with three elements — this is where the translation vector was sitting before, so this is zero, which means we don't have any translation in here — and here we have a matrix R. This R is a 3x3 matrix, a regular rotation matrix, as you should all know it from the Cartesian world, rotating around the x, y, or z axis. In case you forgot what these matrices look like, I have one slide to remind you. In the 2D world it's easy: there is just one axis of rotation around which we can rotate, and so if we want to rotate a vector x around the center of our coordinate system by, let's say, an angle theta, we can express this by this rotation matrix multiplied by my vector x, and the output I get is the rotated vector x. So even in the Cartesian world we could use a standard rotation matrix to express rotations of vectors or points in the 2D world. Everyone is familiar with this concept, right? Everyone should have heard that. Perfect. Okay, this was the 2D case; in 3D it gets a little bit
more complicated, because we have three rotation axes: you can rotate around the x-axis, the y-axis, or the z-axis, and there are a large number of different ways of expressing rotations — around which axis should I rotate first, where do the axes lie, or do I only have a single axis which can have an arbitrary orientation in space — so there are different ways of expressing rotation in 3D; here is just one way of doing it, using the standard rotation matrices. This matrix over here expresses a rotation in 3D around the x-axis by the angle omega. This component stays constant: there's a one in the first dimension, which represents x, because I rotate around the x-axis, so a point on the x-axis is not changed, and in the two other components I have exactly the rotation matrix we had in 2D. In a very similar way we can do that for the y and for the z coordinate: there the z coordinate is unchanged, and here you have the one in the second dimension. This gives you three different rotations, one around the x-axis, one around the y-axis, and one around the z-axis, and if I want to express an arbitrary rotation in 3D, I can do that with three rotation matrices — a rotation about x, y, and z — just by multiplying them. Here you can already see: we have three rotations, we multiply these matrices, and we get a new matrix which expresses the rotation around all three axes. That's something which worked in the Cartesian world too, but only for rotations, not for translations, and in homogeneous coordinates we can now use this while also taking the translations into account. Okay, so going back: for our simple rotation as before, we have our rotation matrix R, which is exactly this matrix over here, or this one, depending on whether we are in 2D or in 3D, and that's how I can express a rotation. Now we can actually combine both, a translation and a rotation, and this combined matrix looks like this: we have our matrix R here.
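The three elementary rotations and their composition can be sketched as follows; a minimal NumPy version of the standard matrices just shown (angles and names chosen for illustration):

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis: the x component of a point is unchanged."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

# A point on the x-axis is not changed by a rotation about x:
assert np.allclose(rot_x(0.7) @ np.array([1.0, 0.0, 0.0]), [1.0, 0.0, 0.0])

# An arbitrary 3D rotation: just multiply the three matrices; the product is
# again a valid rotation matrix (orthonormal, determinant +1):
R = rot_z(0.1) @ rot_y(0.2) @ rot_x(0.3)
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```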
This is where the identity was before, when we looked at translations; the translation vector is here, and the bottom row is not changed. This matrix now has six parameters: three parameters in the rotation and three parameters in the translation, and this is called a rigid body transformation, or a motion transformation, and that is the thing we will use frequently in this course. For example, if we have a measurement in the coordinate frame of the sensor, we may need to compute where this point is, given that the sensor is not mounted exactly at the center of the robot, and given that the robot is somewhere in space. We can express this by multiple coordinate transformations: first from the measurement into the frame of the sensor, then from the sensor to the center of the robot, and then from the center of the robot to the point in the world. Depending on how one expresses this, we have to do a forward step or a backward step — that depends on what we actually want to compute. We can however do more; at least you should know what else you can express. We can have so-called similarity transformations, which have one additional parameter, and this parameter scales the object, so we can make an object smaller or bigger. This adds a scaling factor m inside the transformation matrix, multiplying the rotation part. We cannot simply multiply the whole matrix by m from the front, because then the one in the bottom-right corner would also be multiplied by m, and the scale would be normalized away again; since we don't want that, we put the m only there. If we do it that way, we can actually scale an object. We can also have affine transformations, which have twelve parameters: then we just have a matrix A over here with nine parameters, and these nine parameters also include different scale parameters and shear, so you can shear objects. That's nothing we're going to use here, but we can actually
use this tool of homogeneous coordinates, this coordinate frame, to express all of that in a very elegant way, just by a simple matrix. We can then multiply several of those matrices with each other — this way we chain those transformations — take the resulting matrix, multiply a vector with it, and obtain the final transformation. Just as a small overview in the 2D world, because it's easier to visualize, you have all the different types of transformations: a simple translation, where again we have the identity matrix in this, now 2x2, block over here and our translation in x and y; we can mirror an object at the y-axis, then this entry becomes negative, and we could do the same with the x-axis if we wanted; we can have a rotation, with our rotation matrix R over here; we can combine rotation and translation, with the rotation matrix over here and the translation vector over here in the 2D world — again, that's something we are going to use. Then you have further things like scaling the objects, and you can even have a different scale along the different axes, so a different scale in x and a different scale in y; you can shear — a symmetric shear, where both axes are sheared in the same way, or an asymmetric shear, where this is not the case — affine transformations, or general projections. So we can do all of that in this nice space. And if we have our matrix M here with which we can do a transformation, the next nice thing is that we can invert the matrix to get the inverse transformation. So if I carry out a transformation by multiplying M and x, and I want to undo that operation, I can multiply the resulting vector with M to the power of minus one: M inverse times x' gives me back my original vector x. That's really nice, because this way I can invert transformations very easily, without needing to go into the
details of the transformation — we just need to invert the matrix. Yes, please? Is this M always invertible? Yes, M is always invertible here; this results from the way you construct those matrices: you have this one over here, the zero vector here, the translation goes here, and this is a rotation matrix. The rotation matrix is always invertible, and as a result you can show that if you construct M in this way, it is always invertible. The only thing you need to take care of is that computing the matrix product is not commutative: it is something else if you multiply matrix M1 with M2 or M2 with M1. In general these two operations will not give you the same point, because of course if you rotate first around the x-axis and then around the y-axis, you get a different result than if you first rotate around the y-axis and then around the x-axis. So the order in which you conduct these matrix multiplications needs to be taken into account; they are simply different transformations that you apply. If you have this object here, I can either rotate it first and then move it somewhere in space, or I can move it somewhere and then rotate it, and depending on the order, different results may come out. Okay, enough said about that. Again, just to wrap up: the thing you will use most in this course is the expression of motions. We have a rotation matrix with three parameters and a translation vector with three parameters, so six parameters to express the motion of objects in the 3D world; if we are only in 2D, there is just one rotation and two translations, in exactly the same form, except that R turns into a 2x2 matrix. That's what we're going to use extensively in this course; there's no black magic behind it, and I hope that in the next hour you will explore this a bit more on your own computers with Octave. The tutor will take care of the exercise, guiding you through that process so
that you get used to this concept and feel comfortable with it. So, to conclude: homogeneous coordinates are just an alternative representation to the Euclidean space for geometric objects, and it is the coordinate system used in projective geometry. The important thing is that two vectors represent the same geometric object if they are just scaled versions of each other: x is equivalent to any scalar times x, as long as the scalar is not equal to zero — that's the key definition here. Through the extra dimension, a lot of nice things can be represented, like points at infinity, and I can also integrate rotation, translation, and all the other transformations you have seen here into matrix operations; in this way I can easily chain those transformations, and that's the reason why we often use this representation. If you want to know more, there are a lot of textbooks on geometry and projective geometry which explain this; I actually found the Wikipedia page quite good as an introduction, and it has further references to longer tutorials. So if you just want to reread what I've presented here, it is probably the easiest resource to look up, although there may be better descriptions. That's it from my side for today. Are there any questions about what you have seen here? Okay, so that's it. I hope a lot of questions will come up in the next hour when we do the exercise. Then we see each other next week on Monday. Thank you very much.
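As a warm-up for the exercise hour, the transformations from this lecture can be sketched compactly; this is a hypothetical illustration in Python/NumPy rather than Octave, and all frame names and poses are made up:

```python
import numpy as np

def transform(R, t):
    """Build the 4x4 homogeneous rigid body transform [R t; 0 0 0 1]."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Pure translation: identity rotation plus a translation vector.
T = transform(np.eye(3), [10.0, 0.0, -5.0])
p = np.array([1.0, 2.0, 3.0, 1.0])                 # point in homogeneous form
assert np.allclose(T @ p, [11.0, 2.0, -2.0, 1.0])

# Chaining, e.g. sensor frame -> robot frame -> world frame,
# all done by plain matrix products (offsets invented for this sketch):
robot_T_sensor = transform(np.eye(3), [0.2, 0.0, 0.0])    # sensor mount offset
world_T_robot  = transform(rot_z(np.pi / 2), [5.0, 3.0, 0.0])
world_T_sensor = world_T_robot @ robot_T_sensor

# Inverting the matrix undoes the transformation:
M_inv = np.linalg.inv(world_T_sensor)
assert np.allclose(M_inv @ (world_T_sensor @ p), p)

# Matrix products are not commutative: the order of transformations matters.
assert not np.allclose(world_T_robot @ robot_T_sensor,
                       robot_T_sensor @ world_T_robot)
```

For a rigid body transform the inverse also has the closed form with rotation part R transposed and translation part minus R transposed times t, which avoids a general matrix inversion.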
SLAM_Course_2013
SLAM_Course_13_GridBased_SLAM_with_RaoBlackwellized_PFs_201314_Cyrill_Stachniss.txt
Today we look into the last chapter of particle-filter-based simultaneous localization and mapping; that means we present a variant of FastSLAM that allows us to build grid maps. So far, all the mapping we have done used landmark-based approaches to SLAM: we assumed landmarks in the environment that we can identify with our sensor, typically based on features we extract, and we built a map of those landmarks — we stored the (x, y) or (x, y, z) locations of those landmarks in the environment. We have discussed two main paradigms for doing that: one was the Kalman-filter-based approach, and the second one was the particle-filter-based approach using FastSLAM. What I would like to present today is a variant of FastSLAM that is designed to operate on grid maps. I introduced grid maps, I think, two weeks ago, in combination with a very short introduction to scan matching, and these are the two concepts we are going to combine today in order to build a particle-filter-based system for simultaneous localization and mapping with grid maps. The advantage of those grid maps, in brief, was that we do not require predefined feature extractors, or knowledge about what kind of features or landmarks we are going to observe in the environment; we can just take observations of obstacles, for example from a laser rangefinder, in order to build a map of the environment. The map is a grid, similar to an image, where for every cell — in the image analogy that would be a pixel — we store the probability that this cell, this place in the environment, is occupied or free. That was the key idea of occupancy grid maps: in the end we get a map which is very similar to an image, where every pixel is, in the perfect case, either black or white — white means free
space, black means obstacle — and as we are never certain about what we measured and what the world looks like, we maintain a so-called occupancy probability for each cell. If this probability tends towards one, then with high likelihood this place is occupied; if it tends towards zero, the cell is very likely not occupied, and that means free. Then we introduced particle filters, first for localization, as one way of tracking the pose of the robot using a cloud of particles, where every particle is a pose hypothesis, and given a map of the environment we could localize the robot very effectively; we did that rather briefly, discussing the individual steps of the particle filter algorithm. And last week we looked into FastSLAM for landmarks. That was the first particle-filter-based approach to SLAM in the robotics community that worked efficiently on larger-scale maps — up to, say, a million landmarks — and the key idea was to separate the estimation of the belief about the trajectory of the robot and the map of the environment into two parts. One part was estimating only the trajectory of the robot using a particle filter, and then, for every particle, doing mapping with known poses for that individual sample. So every sample, every particle, carried its own map, and this way we get a belief about the trajectory that the robot took and about the map of the environment: we first estimate the trajectory of the robot and then ask, given that trajectory, what does the map look like, and in this way we obtain a joint belief. We did that for landmark-based SLAM exploiting two facts: first, that we can track a low-dimensional space efficiently with a particle filter — we used it for the pose — and second, that once we know the poses, mapping is easy. Those were the two insights that we exploited in this approach, and today I
would like to look at how we can do that for building grid maps — so not building a map of features or landmarks, but building a dense grid map, separating the space into occupied and not-occupied cells. We also have this video, which you have actually seen before: what happens if you just take raw odometry information and perform mapping with known poses — that was the algorithm of Moravec and Elfes that I introduced in chapter 10, I think, of this course. You can see here that if we just use raw odometry as if it were the ground-truth poses and perform mapping with known poses, we get a map of the environment that is not at all suitable for navigation tasks: we can guess that here are some corridor-like parts of the environment, these may be rooms, but everything else is wild guessing. So this does not work: the assumption that we know the poses of the robot based on odometry actually kills this approach — this assumption does not hold. So we explicitly have to model the pose of the robot in our belief. That's something we already knew, as we did this in the Kalman-filter-based landmark approach, but this was a nice illustration, because with grid maps you directly see with your own eyes that the map is inconsistent, or at least very unlikely to represent what the environment really looks like. So the key question today is: can we exploit the ideas that we used in FastSLAM in order to build a grid-based variant of FastSLAM? That means using this factorization — Rao-Blackwellization — of our belief, splitting it up into two parts, using a particle filter to estimate the pose, and then a per-particle map to estimate the map of the environment. That worked in the following way: we have our belief about the poses of the robot and the map
of the environment, given our observations and our controls, and we split that up with this idea of Rao-Blackwellization: we have one posterior about the path that the robot took — similar to localization, except that we don't have a map given here — and this is what we will use the particle filter for, exactly in the same way as we did for landmark-based FastSLAM. And then we have the second belief over here, which is the map posterior given the poses of the robot, and the question is how we can estimate that; obviously we expect this to work differently in the grid-based setting, because the map representation is different. A question about the indexing: if you have T odometry commands, you will have T+1 poses, so you have one more, and what x0 is used for is to set the initial uncertainty. If your initial uncertainty is basically zero, all particles will start at one location; otherwise you can distribute them according to an initial belief, or you can use x0 to represent the center of your coordinate frame. What you typically assume is that you don't have an observation at x0: you execute the motion command, which leads to x1, and at x1 you get the first observation — that's the convention — so you have T observations and T controls. x0 is ignored because we don't have any observation there, and without an observation we don't actually need the pose, unless we want to set the center of the reference frame; if I needed it, I could add it, but typically we don't. Any further question at that point? Okay, perfect. So, very similar to the landmark-based approach: once we know the poses of the robot, the mapping problem is easy, right? And this is exactly the fact we want to exploit for the grid-based approach. If you look at the graphical model that we obtain in
this setting, it's exactly the same one we had before, except we now have this single map estimate rather than the individual landmarks split up separately; apart from that, the graphical model is identical. You can also see that you start at x0 and then go to x1 with the control command — oh, that labeling is wrong; this is an old notation which differs from the one the Probabilistic Robotics book uses, so this is just an old figure: it should read x1, x2 up to xT and u1 up to uT, sorry for that. Okay, so the key ideas of the grid-based approach using a Rao-Blackwellized particle filter are exactly the same as for the landmark-based approach: every particle represents one possible trajectory that the robot took, every particle maintains its own map, and every particle updates its map using mapping with known poses — the algorithm of Moravec and Elfes that we discussed. How does that look? Here is an example. Consider we have three particles — obviously a number that is far too small for a realistic problem, but good for illustration — which travel through the environment, and every particle, as I said, has its own map: this is the map of particle one, particle two, and particle three. As they have different pose estimates, different trajectory estimates, they will obviously end up with different map estimates. We are here at a point where the robot closes a loop: it started over here, traveled around, and is about to close the loop over here. If you compare the map of particle one and particle three, you can see — at least if you are a little trained in reading occupancy grid maps — a repetitive pattern: this structure and this structure are actually the same obstacle in the environment, and you see these ghost corridors over here.
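The structure just described — every particle carrying its own trajectory and its own grid map — can be sketched as a toy skeleton; this is not the course implementation, and the motion model, observation likelihood, and map update are passed in as placeholders (a real implementation would also deep-copy particles at resampling and reset the weights):

```python
import random

class Particle:
    """One RBPF sample: a trajectory hypothesis plus its own map and weight."""
    def __init__(self):
        self.trajectory = [(0.0, 0.0, 0.0)]   # pose history (x, y, theta)
        self.grid = {}                        # cell -> occupancy value
        self.weight = 1.0

def rbpf_step(particles, u, z, sample_motion, measurement_likelihood, update_map):
    # 1. propagate each pose hypothesis with the motion model
    for p in particles:
        p.trajectory.append(sample_motion(p.trajectory[-1], u))
    # 2. weight each particle by how well z fits that particle's own map
    for p in particles:
        p.weight *= measurement_likelihood(z, p.trajectory[-1], p.grid)
    # 3. resample proportional to weight: inconsistent maps tend to die out
    total = sum(p.weight for p in particles)
    particles = random.choices(particles,
                               weights=[p.weight / total for p in particles],
                               k=len(particles))
    # 4. mapping with known poses, independently for each particle
    for p in particles:
        update_map(p.grid, p.trajectory[-1], z)
    return particles
```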
This results from the fact that the robot simply had a wrong pose estimate when re-entering the loop. So this map is actually inconsistent: there are obstacles in the map which do not occur in the environment. In contrast, particle number one did a pretty decent job of estimating the trajectory of the robot — at least the map looks visually consistent; looking at this part of the environment, that seems to be a good match. And down here, particle number two, you can see a slight misalignment, a small shift, but not as bad as particle number three. If you look at the importance weights that we obtain in the particle filter, you therefore expect particle one to have the highest weight, particle three the lowest, and particle two to sit somewhere in between: the observations the robot obtains do not fit well with the map built so far by particle three, but fit quite well for particle one. Okay, so that was exactly the idea we used in FastSLAM 1.0 for the landmark case, just applied to grid maps — so it's easy, right? Let's see what the result of this approach is. Unfortunately, this is a typical outcome: this map was built with, I think, around 2000 particles, using exactly the FastSLAM 1.0 ideas we discussed last week, just applied to the grid-based case. So this does not fly; it doesn't seem to work. Why not? One of the reasons is that we still need to maintain a very large number of particles if the motion noise is high: with high motion noise I need a large number of samples to cover the possible states in which the robot can be, with a sufficiently high density of samples — I need
to cover the areas of high likelihood quite densely with samples, and this is one of the problems we have here: we don't have enough samples to cover the pose uncertainty well. The second reason is that this map representation is a bit more brittle under high pose uncertainty compared to landmarks, where I use a Gaussian estimate per landmark. With data association and a sparse number of landmarks, I can say: this observation corresponds to this landmark, that observation to that landmark, and by doing the update the landmark positions will adjust. This is different for the grid map, where I update every grid cell independently of the others; if I'm misaligned, the map gets screwed up much more easily than in the landmark-based case. A grid map built with known poses, where the known poses are wrong, tends to be less usable for navigation than a landmark-based map. Obviously the estimated locations of landmarks may also be inconsistent if you observe them from a wrong pose, but the landmark map is often still usable: you may not always be at the correct position in a global reference frame, but you are still often able to align the robot with respect to the local estimates of those landmarks. Okay, so this doesn't seem to work: we have high motion uncertainty, high motion noise, and not enough samples to cover it. So one of the ideas is: let us improve the pose estimate before we apply this approach, and see if that works better. And what is one easy way to improve your pose estimate that we already discussed in the lecture, the easiest way to incorporate an observation? Exactly: scan matching.
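Before the details, a toy illustration of what scan matching does; this 1D, translation-only sketch is invented for illustration (real scan matchers work in 2D or 3D and also search over rotation): find the offset that makes a new scan overlap best with a reference.

```python
import numpy as np

def scan_match_1d(ref_scan, new_scan, search_window=5):
    """Brute-force search over integer shifts for the alignment that makes
    new_scan overlap best with ref_scan (higher score = better overlap)."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-search_window, search_window + 1):
        shifted = np.roll(new_scan, shift)
        score = -np.sum((ref_scan - shifted) ** 2)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

ref = np.zeros(20)
ref[8] = 1.0                      # one 'obstacle' seen in the reference scan
new = np.roll(ref, 3)             # same obstacle, but with a pose error of 3
assert scan_match_1d(ref, new) == -3   # the recovered correction undoes it
```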
So the easy thing we can do is this: before we run the algorithm — we don't change the algorithm itself — we just run an incremental scan matcher. That means we align the scan at time t with respect to the scan at time t-1; we locally adjust the poses so that the scans overlap best. We can do this from time step t-1 to t, or we can take, say, the last 20 scans and align against those — always keeping the last 20 and dropping the oldest — and that actually works quite well. This was exactly the idea of scan matching we discussed; in fact, this is exactly the same slide from a few weeks ago. What we are trying to do at every time t is to maximize a probability that consists of the observation model — the current observation at time t, the current pose estimate at time t, and this should read m_{t-1}, the map built up to time t-1 — and the odometry model, which, given the best pose at the previous time step and the odometry, gives the pose at the current time step. We then find the x_t that maximizes this probability. You can really see this as an alignment of the new scan against the map built so far, where this map typically consists of, say, the last 10 or 20 scans — that is the standard way scan matching is done. If you do that, here is an example of what the uncertainty of a particle filter looks like for a robot traveling a few meters straight and then doing a left turn. This is the particle distribution obtained using raw odometry; with scan matching I typically get a more accurate estimate — you can see this by the blue particle clouds — because the uncertainty is much smaller, since we take into account the observation as well as the odometry information to
incrementally optimize the pose of the robot. As a result, if we take the blue distribution instead of the red distribution, we could argue that we do better. Here is an example of the map that pure scan matching builds — the same dataset we have seen before. You can see that as soon as the robot re-traverses parts of the environment, there are misalignments: the robot sees the same place, I think, three times, but always with a slightly wrong pose estimate, because it only aligns its current scan against the last 20 scans — this is just an incremental approach. But as we can see, there is obviously still an error in the map, yet the error is much, much smaller than before. So if we take this as the input to our algorithm, we expect to do a much better job than previously, and this was actually the approach proposed by Hähnel and colleagues in 2003: use scan matching as a pre-correction step and then run FastSLAM 1.0 for grid maps. There is one small problem with this approach from the mathematical point of view: the FastSLAM 1.0 particle filter, as we discussed it, uses the odometry motion model as the proposal distribution, and then updates the weights using the observation model. But now we are using the observation already in the proposal, because the input is scan-matched, and that in turn must change the weight computation. So, although the result looks decent, we do something mathematically wrong if we apply this approach directly. Hähnel found a nice workaround for that problem, and his key idea was: let's not scan match the whole input; let's scan match only chunks of the input. We take our log file, our data stream, and always scan match blocks of, say, 100 poses: scan match 100 poses, then
scan match the next 100 poses, and the next 100 poses, and so on. You can see these as locally consistent, decently built maps, and then he used the particle filter only to estimate the jump from chunk 1 to chunk 2, from chunk 2 to chunk 3, using the standard FastSLAM approach — you can see this as building small maps and aligning those small maps. If you look at the graphical model of how this works: we take a chunk of k, or k-1, observations and odometry commands, use scan matching to estimate how we go from x0 to x_{k-1}, and then do one step with the real odometry. So this is the odometry u': we take our scan-matched odometry, add one real odometry step on top, which leads to x_k, take only the k-th observation — the one obtained at this last pose — and apply the particle filter algorithm. So you can see this as not taking a single observation, but combining many observations into one local map, and then applying the particle filter only to go from one local map to the next. In this case you still do, mathematically, the right thing, because you estimate with the particle filter only the pose x_k with respect to x_0, and x_{2k} with respect to x_k — the particle filter simply has far fewer steps to perform. This is called Rao-Blackwellized particle filtering with improved odometry, and if you do that, you actually get pretty good results. Here is a small example, which shows only the trajectory estimates, not the map: the robot started here, drove around, and is now shortly before the loop closure. What happens now is the loop closure: all the particles re-enter the known part of the environment, and as a result those particles where the current trajectory
Here is a small example, showing only the trajectory estimates, not the map. The robot started here, drove around, and is now here, shortly before the loop closure. What happens at the loop closure is that all the particles re-enter the known part of the environment, and as a result the particles whose current trajectory estimate is in line with the previous trajectory estimate get a high weight, the others get a low weight, and the ones with low weight die out. That actually happens here: you can see that two hypotheses survive, and then the robot continues navigating through the environment. This animation only shows the trajectory estimate, so you can see that when the robot moves through unknown parts of the environment it gets more uncertain, and whenever it re-observes something it has seen before it gets more certain again. This is the resulting map, and this was the loop-closing point. Here is the same animation again, now showing in the background the map of the most likely sample; the map in the background keeps switching because it always corresponds to the currently most likely sample. Using this approach we can actually build decent maps for medium-sized environments with a number of samples that we can maintain in memory. So this was the first working solution for particle-filter-based SLAM with grid maps: Hähnel in 2003 took the ideas of FastSLAM 1.0, combined them with building local maps by scan matching, fused both together, and came up with a working system. You may still see this as a somewhat ad hoc solution, because we artificially have to build these chunks, scan match within the chunks, and only use the particle filter for the transitions between chunks; making it mathematically correct required this workaround. So the question is: can't we do better? Can't we start from the particle filter itself and improve the way it works, in a mathematically sound way, to come up with a better approach?
This is the idea that Montemerlo originally introduced with FastSLAM 2.0 for landmarks, and here is a variant of how we can use it for grid maps. So we now move from FastSLAM 1.0 to FastSLAM 2.0, but in the grid-map case. The key idea, which we very briefly discussed last time, is to use a better proposal distribution: use the current observation in the proposal distribution of the particle filter, so that the scan matching happens not as a preprocessing step but inside the particle filter when drawing the next generation of samples, and then correctly take into account the fact that we used the observation in the proposal when computing the particle weights. That is the key idea we look into here.

So we have this improved proposal, which I very briefly sketched last time. In order to estimate the pose of the k-th sample at time t, we draw it from a distribution which takes the full trajectory of that sample, the odometry, and the observations, and tries to estimate x_t. The important thing is that the most recent observation z_t appears in here; this is the main difference to the previous proposal distribution, that we explicitly take the observation into account. This is especially helpful if you have a sensor which provides, or which you can use to compute, a pretty good local estimate, like a laser rangefinder that gives you proximity information to obstacles. Just by aligning scans, by doing scan matching as we have seen before, we get quite decent incremental trajectory estimates. If we have a sensor with these properties, we can use it to come up with a more accurate sampling strategy. That means the uncertainty of the proposal distribution is smaller and concentrates on the meaningful areas of the state space; as a result we need fewer samples and are more efficient with this approach, while still being highly accurate, because we sample only in the high-likelihood areas of the state space.

Okay, let's have a look at that. This is the optimal proposal. The only difference here is that I have already integrated the previous poses, observations, and odometry commands into a map built up to the current point in time: this is the map estimate of particle number i up to time t−1 (the index t−1 is not shown here, but it is the map built so far by that sample). What we are trying to do now: we know the pose of the particle at t−1, the map built so far, the current control, and the current observation, and we ask where we should end up. So we can apply Bayes' rule and end up with an expression containing well-known terms on top: the observation model that we already know, for example from localization, and the odometry model. We have the product of those two distributions, and this is where we want to draw our samples from. One of the key insights for building an efficient algorithm is to inspect these distributions and see what they look like. Assuming we have a laser rangefinder, we get pretty good local estimates: we can align two scans recorded at nearby poses quite well and come up with a substantially improved pose estimate, so the observation likelihood is typically a very peaked distribution. Odometry, on the other hand, has a rather flat distribution: it comes just from counting the revolutions of the wheels, and taking into account slippage, inaccuracies in the system, and different properties of the ground surface, you get a rather flat distribution. So we can argue that the observation term dominates this product, at least in the area where the robot can actually be given the odometry information, and this is the insight we want to exploit.

Okay, so here is our proposal distribution again. Let's call this term τ, just to make the writing a bit easier: whenever we write τ(x_t), it means exactly this expression, the product of the observation likelihood and the odometry model. Now look at the term in the denominator: it contains the current observation, the previous pose (that's important), the map, and the odometry information. What is missing, compared to the numerator, is x_t, the current pose estimate. That is no problem for us: we just integrate over all possible poses and multiply the expression by the likelihood that each pose actually occurs. We then obtain the same expression as before, just dropping the odometry where it is not needed given x_t, times the likelihood of x_t given the odometry and the previous pose. What we have now is exactly our term τ again, except that we integrate over all poses x_t. So we can rewrite the proposal distribution in the compact form τ divided by the integral of τ, and this is something we will be able to exploit very soon.
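Written out (using the slide notation, with m^(i) the map of particle i built so far), the improved proposal and the shorthand τ are:

```latex
p(x_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, z_t, u_t)
  = \frac{p(z_t \mid m_{t-1}^{(i)}, x_t)\; p(x_t \mid x_{t-1}^{(i)}, u_t)}
         {\int p(z_t \mid m_{t-1}^{(i)}, x')\; p(x' \mid x_{t-1}^{(i)}, u_t)\, dx'}
  = \frac{\tau(x_t)}{\int \tau(x')\, dx'},
\qquad
\tau(x_t) := p(z_t \mid m_{t-1}^{(i)}, x_t)\; p(x_t \mid x_{t-1}^{(i)}, u_t).
```

The numerator is the product of the peaked observation likelihood and the flat odometry model discussed above; the denominator is a constant for the given particle.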
Now let's investigate this expression a little more closely. The observation model typically gives us a locally peaked estimate: if we roughly know where we are, then by aligning two scans we typically get a pretty accurate, peaked distribution. The odometry, on the other hand, is a rather flat distribution, but it limits us globally in the environment: if the robot reports that it moved five meters forward, it is quite unlikely that it was teleported a kilometer away, something we can typically exclude. So this is a global limit: there is a maximum distance the robot can have traveled given the reported motion, and although we model it with a Gaussian, which in theory has infinitely long tails, in reality the robot is not teleported somewhere else. On the other hand, we may have two rooms which look exactly the same. If I optimized only for the observation term, and the robot is currently in this room while the room next door looks exactly the same, because they are identical lecture halls, the robot would immediately get a bimodal distribution saying "I'm either here or there, both fit perfectly." That is definitely true, and therefore we have to exploit the product of both terms, and understand the properties of those two terms, in order to come up with an efficient algorithm. So the odometry is a flat distribution with a single mode which typically does not extend into the very far tails; the observation term can have multiple modes, but each mode is very, very peaked. If we sketch that, the observation model is the blue curve with multiple peaks, while the green plot gives us a global limit: it says we are here, we cannot be over there, because the odometry forbids it. So for the product of both terms, we effectively only need to consider the roughly sketched red part.
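A tiny 1-D numerical illustration of this product argument (all numbers are invented): a peaked but bimodal observation likelihood, multiplied by a flat, unimodal odometry density, keeps only the mode compatible with the odometry.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 2001)
# peaked but bimodal observation likelihood: two identical-looking rooms
obs = gauss(x, 0.0, 0.1) + gauss(x, 8.0, 0.1)
# flat but unimodal odometry density around the reported motion
odo = gauss(x, 0.3, 1.0)

product = obs * odo
# the product keeps only the mode compatible with the odometry
print(x[np.argmax(product)])  # close to 0; the spurious mode at 8 is suppressed
```

The odometry acts as the "global limit" here: it multiplies the far-away observation peak by an essentially zero value.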
We can say: given that the odometry term is basically zero (it is close to zero, but let us assume it is really zero) at all poses which are far away from the odometry estimate, say more than six sigma from its mean, we do not need to integrate over all possible poses in the environment. We only need to integrate over the local area around the mean of the odometry estimate, say plus or minus six sigma or something like this, so that we can be pretty sure we are within that area. And within that area the product is typically dominated by the observation model; both models still contribute, but the observation model dominates the product. So the first thing we do is approximate this term: it stays more or less exactly the same if we do not integrate over all possible poses, but only over those where this expression is bigger than some epsilon, which is exactly the local neighborhood given by the odometry estimate. This slide is just a repetition, the definition of τ, so that everything is on the same slide. What we now need to do is find roughly where we are, and then integrate over this local area. So the question is: how do we actually sample from this term? The problem is that we do not have it in closed form, especially the observation model, which we can only evaluate point-wise: for every pose we can say how well the scans align, so we can evaluate it point-wise efficiently, but there is no closed form. So how can we efficiently sample from this term? What could be a good idea for sampling from a term we can only evaluate point-wise?
We have a function we can evaluate point-wise, and we know in which area we are likely to end up, because everything outside this local area is zero. So one thing we can do is draw a couple of random points in that local area, evaluate the function point-wise at those points, and then compute a Gaussian approximation from them, because we can draw efficiently from a Gaussian distribution. That is exactly what I am doing: I take this term τ, compute a Gaussian approximation of it, and then use this Gaussian approximation as the proposal distribution. How does it work? Let's assume this is how our distribution looks in reality; it is not really a Gaussian, maybe not too far from one, but definitely not a Gaussian distribution. The first thing we have to do is find this area, since we do not know where the robot is in space. We have a pretty good initial estimate from the odometry, so we perform scan matching, which is a kind of gradient descent on the error function, or equivalently finds the maximum of this probability distribution. The scan matcher tells us we are somewhere here; that is the mode. We then do exactly what I said before: we draw some random points in the local neighborhood of this point and evaluate them point-wise under the function, because we can evaluate it point-wise. (Instead of random samples, I could also just take a local grid around the position found by the scan matcher; that is also fine.) The size of each bubble in the plot shows how high the value was that we obtained by evaluating the function at that point; these are points sampled around the maximum reported by the scan matcher. Then we take those weighted samples and compute a Gaussian approximation based on them. Here are the function itself and its approximation again; they are not too far from each other. So I take this Gaussian distribution, draw samples efficiently from it, and use it as the proposal distribution in my particle filter implementation.

So what did I do here in the end? Before, we had just the odometry motion model as the proposal. The key trick is that I also want to take into account the most recent observation, so I integrated it into the proposal distribution, did some transformations, and came up with this term τ, or rather τ divided by the integral of τ over all poses. Then I said: given my specific sensor properties, there is only a local area which really matters, where this distribution is above zero, so I only need to integrate over this area. Then, because I need a form which is suitable for sampling, I approximate it with a Gaussian distribution and use that as my proposal. So I have a proposal distribution from which I can sample efficiently, because it is a Gaussian, and which takes into account both the observation and the odometry information. This is the step that integrates the whole idea of scan matching directly into the particle filter, without needing the earlier workaround. Once I have computed these weighted points of the function τ, I can also use the same sample points very efficiently to approximate the integral — but sorry, I am going too fast, that is the next step.

First I want to compute the mean and the covariance matrix of this Gaussian distribution. How do I do that? I said we take the sample points and compute a Gaussian approximation based on them, and that is exactly what you see here: the points x_j are drawn around the maximum reported by the scan matcher, we evaluate each of them under τ, and this gives us the mean and the covariance matrix, the parameters of my Gaussian approximation of the function τ. Concretely, with the values τ(x_j) as weights, the mean is μ = (1/η) Σ_j x_j τ(x_j) and the covariance is Σ = (1/η) Σ_j (x_j − μ)(x_j − μ)^T τ(x_j), with the normalizer η = Σ_j τ(x_j). Is it clear to everyone where these equations come from? Okay, just to repeat, there is no black magic here: we wanted to compute a Gaussian approximation of τ; we do not have τ in closed form, but we can evaluate it point-wise, and from scan matching we know where the mode of the distribution is; so we sample points around the mode, or evaluate on a grid pattern, compute the value of each x_j under τ, and use these values to compute the mean and the covariance matrix, which gives us a Gaussian approximation of τ. And of course, if I have those points, I can also approximate the integral of τ: I just need to sum over the values at those points. This comes for free: if I take a grid pattern with uniform spacing, the sum over those values gives me an approximation of the integral. So that is also easy.

Okay, so now we have a proposal distribution which takes into account all the important information. That's great, that's what we wanted to have. But there is something else we need to do next: we need to figure out how to compute the importance weight.
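A minimal sketch of this Gaussian-fitting step in Python (the function name, the sampling radius, and the uniform neighborhood sampling are my own illustrative choices; a real implementation would sample robot poses (x, y, θ) around the scan-match result):

```python
import numpy as np

def gaussian_proposal(tau, x_mode, radius, num_points=300, rng=None):
    """Fit a Gaussian N(mu, cov) to the unnormalized density tau by
    evaluating it at points drawn around the scan-match mode x_mode.

    Also returns eta = sum_j tau(x_j), which approximates the integral
    of tau (up to a constant factor) and is reused later as the
    particle's importance-weight update."""
    rng = np.random.default_rng(0) if rng is None else rng
    xs = x_mode + rng.uniform(-radius, radius, size=(num_points, len(x_mode)))
    w = np.array([tau(xj) for xj in xs])              # point-wise evaluation
    eta = w.sum()
    mu = (w[:, None] * xs).sum(axis=0) / eta          # weighted mean
    d = xs - mu
    cov = np.einsum('n,ni,nj->ij', w, d, d) / eta     # weighted covariance
    return mu, cov, eta

# toy tau: peaked but non-Gaussian, centered at (1.0, 2.0)
tau = lambda p: np.exp(-np.abs(p - np.array([1.0, 2.0])).sum() / 0.1)
mu, cov, eta = gaussian_proposal(tau, np.array([1.0, 2.0]), radius=0.5)
print(np.round(mu, 2))  # close to [1. 2.]
```

Note how η serves double duty: it normalizes the Gaussian fit here and, as discussed next, it is exactly the factor by which the particle weight is updated.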
The importance weight was given by the target distribution divided by the proposal distribution, and here we use exactly the same term we ended up with last week for FastSLAM 1.0. If we compute target divided by proposal, taking into account the optimal proposal distribution, this is the resulting weight: the new weight is the old weight (or, if we carried out resampling, the old weight is set to one, so it goes away) times the observation likelihood. The only thing missing now is the current pose of the sample: by taking the most recent observation into account, the computation of the weight changes, so compared to the expression we had before, x_t is missing here, which makes the term harder to evaluate. However, that is not a big problem, because we can apply exactly the same trick we used when setting up τ. This term is exactly the particle weight, so we do the same trick as before and arrive at the integral over τ: we expand the expression by integrating over x_t, the odometry motion model appears, and we get back the x_t that we need in order to evaluate the observation likelihood efficiently. So this is exactly our τ, nice, and the weight is the integral over τ. And that's great: we already computed that term when computing the proposal distribution. We can just sum over the points x_j, evaluated under τ, which we used to compute the proposal. So the expression by which we divided our Gaussian proposal distribution is already the weight update: these are the points sampled around the maximum found by the scan matcher, each x_j is evaluated under τ, and we sum over the K points we have drawn. This gives us an importance weight for every particle, with no computational overhead, because we already computed this term for the proposal distribution.

Okay, so to summarize what we have seen so far: we have come up with a new proposal distribution, following ideas that existed before but specially adapted to the grid-map case, where we take into account the most recent observation to obtain a better, informed proposal distribution. Given this proposal, there is a corresponding way to compute the weights, and then we can run our standard particle filter approach, except that we draw from this different proposal and compute the weights in this different way. And this is how the improved proposal behaves: it takes into account the map built by the particle so far and the current observation, so it can align itself to the structure of the environment. Consider a few examples. First, the robot moves through a large open space, so it gets no useful observation: there is no object that reflects the laser beams, so it simply gets maximum-range readings. In this case p(z | m, x) is just a uniform distribution; it tells me nothing about where I am. As a result I automatically end up with the odometry model, because if there is no observation I can exploit, the best I can do is use the odometry information, which gives the typical banana-shaped distribution. Second, suppose I am driving along a corridor where I
cannot see the end of the corridor. Using its observations, the robot can align itself very well to the left and right walls, because it knows the distance to them, but it has no idea, or very high uncertainty, along the main axis of the corridor. So the estimate along the main axis is just what results from the odometry information: the filter effectively uses odometry in that dimension and the observation in the other dimension. This comes out automatically; there is no if-then-else statement in there. Simply because the sensor information does not allow me to estimate where I am along the main axis of the corridor, in that dimension the uncertainty of the Gaussian is bounded only by the odometry. And when the robot reaches the end of the corridor, so that it can see the end, all samples become very concentrated in one spot, because the robot can exploit its full sensor information, as well as the odometry, to estimate where it actually is. So it is a proposal distribution which adapts itself to the structure of the environment and to the most recent observation. If I see nothing, there is nothing better we can do than sampling from the odometry; but in the corridor situation I do not waste any samples to the left and right, as the standard odometry motion model would, only along the corridor; and at the end of the corridor I sample very concentrated, again only in the area which really matters. The proposal distribution adapts to the needs the system has, which is a very nice and elegant property that we can exploit. Are there any questions at this point that I can hopefully answer? Nice. So just to summarize where we are: we started with FastSLAM 1.0, which did not take the observations into account in the proposal, and that did not work.
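The corridor behaviour can be reproduced numerically. In this small sketch (all numbers are invented), the observation constrains only the across-corridor direction y, while the odometry weakly constrains both directions; the fitted proposal Gaussian comes out elongated along the corridor axis x, exactly as described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# corridor along x: the scan constrains only y (distance to the walls);
# the odometry weakly constrains both directions around (1.0, 0.0)
def tau(p):
    obs = np.exp(-0.5 * (p[1] / 0.05) ** 2)                         # sharp across
    odo = np.exp(-0.5 * ((p - np.array([1.0, 0.0])) ** 2).sum() / 0.3 ** 2)
    return obs * odo

pts = np.array([1.0, 0.0]) + rng.uniform(-1.0, 1.0, size=(1000, 2))
w = np.array([tau(p) for p in pts])
mu = (w[:, None] * pts).sum(0) / w.sum()
d = pts - mu
cov = np.einsum('n,ni,nj->ij', w, d, d) / w.sum()
print(cov[0, 0] > 10 * cov[1, 1])  # True: elongated along the corridor
```

No special-case logic is needed: the anisotropy falls out of the product of the two models.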
The next trick was to use scan matching as a pre-correction of the poses; that gets much better and yields a working system, but still with a not very clean particle-filter formulation. So we took these ideas and integrated everything into the particle filter itself, into the proposal distribution, so that in every step we have a good proposal distribution and obtain pose estimates that we can cover nicely with samples in all those situations. That is the first key improvement needed if I am heading for building grid maps with a particle-filter-based system.

There is another thing we can improve in order to get a nicely working system, and for that let us inspect the resampling step again. Consider that we have four particles, and in every step particles typically die out through resampling; say at every time step t = 1, 2, 3, 4 we lose one sample, just in this example. In the worst case, if we lose only one sample at every point in time (typically we lose more), then with n particles after n steps all the particles which survive at time step t = 4 stem from a single particle. If the history is longer, all the diversity is lost, because they all stem from this one particle. As a result, the resampling step can eliminate samples and lead to the situation that all particles in the current estimate stem from a very small number of particles a few time steps back. That is problematic, because here in the SLAM context we cannot easily recover from it: even if one of those samples by chance moves back to a good pose, as can easily happen in localization, here the past trajectory estimate would still be wrong, and the map would
still be wrong. So this is something that hurts us in the particle-filter-based approach. The improved low-variance resampling technique, which I briefly discussed in the particle filter lecture, helps to reduce this effect: at least if all samples have the same weight, none of them will be eliminated. But the effect still exists; it just plays out over slightly longer periods of time. So what one can do is perform what is called a selective resampling strategy: look at the distribution of the particle weights, and only if the weights differ substantially, carry out the resampling step. That means that just because one observation does not fit very well to a sample, that sample is not immediately eliminated, or very likely eliminated, in the next resampling step; we keep samples around for somewhat longer, and only conduct the resampling step if the particles differ substantially in weight. Because if they differ substantially, one sample is substantially better than another, and it makes sense to resample; but if all perform roughly the same, we should not resample at every point in time, because the likelihood of eliminating particles which are not too bad is actually quite high. This problem is called particle depletion or particle starvation: the particle set simply starves because it is resampled too often.

So the question is: when should we resample? One approach is to use a measure of how much the sample weights actually vary, called the effective number of particles: N_eff = 1 / Σ_i (w^(i))², with the weights normalized beforehand. This gives a value between 1 and the number of samples n, and the higher the number, the harder it is to tell which sample is better: if all have the same weight, I get the maximum value n; if a single particle has weight 1 and all others are 0, I get the value 1. So the smaller the number, the more important it is to resample, because most of the particles have a bad estimate; the higher the number, the less useful resampling is. The easiest way to use this is to set a fixed threshold: whenever the number of effective particles drops below a certain value, say n/2, roughly the middle between 1 and n, I conduct the resampling step; if it is higher than that, there is no need to resample at the moment, and I simply do nothing.

Here is an example, an interesting environment with two nested loops. The robot started down here, drove around here, then spent some time in the inner nested loop, and then closed the big loop. This is one of those situations where particle filter implementations that do not maintain substantial diversity in the history fail to close the second loop, because by traveling through the inner loop multiple times a lot of the particles die out, and only one or two samples with different trajectory estimates remain. What we see here is the number of effective particles; we start with 30 particles, so the value starts at 30. The value decreases here as the robot drives down, and then it closes the first loop around this time step: the value drops below 15, and a resampling step is carried
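A compact sketch of this selective resampling rule (the function names are mine; the low-variance sampler is the standard stochastic-universal scheme from the particle filter lecture):

```python
import numpy as np

def n_eff(weights):
    """Effective number of particles, between 1 and len(weights)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def selective_resample(particles, weights, rng=None):
    """Low-variance resampling, carried out only if n_eff < n/2."""
    n = len(particles)
    if n_eff(weights) >= n / 2.0:
        return particles, weights                  # keep the sample set
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # one random offset, then n equally spaced pointers into the CDF
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return [particles[i] for i in idx], [1.0 / n] * n

print(n_eff([0.25, 0.25, 0.25, 0.25]))  # 4.0  -> no resampling needed
print(n_eff([0.97, 0.01, 0.01, 0.01]))  # ~1.06 -> resample
```

With equal weights the sample set is left untouched, which is exactly what preserves the trajectory diversity discussed above.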
out. The value then bumps up again, and the robot traverses the inner loop, so the value decreases and then stays more or less constant, because by just driving around through parts of the environment it already knows, there is no particle which does a particularly better job than the others. Then at some point the robot leaves the inner loop and closes the outer loop, and the effect happens again: a lot of the trajectory estimates are suboptimal, and just a few, in this case maybe six or seven, did a good job, so it makes sense to resample again and eliminate the bad samples. So while the robot visits new areas, this value slowly drops; at a loop closure we see these big drops, because there we can typically say quite effectively which samples were good and which were bad; and when we revisit already known areas, not much happens, because there we have few means to tell which particle is better or worse. And this is exactly where the second loop closure happened.

So this selective resampling was an easy add-on to the particle filter implementation, but it helps to maintain the diversity of the samples. Here is the result on the Intel Research Lab dataset, the one which was completely screwed up in the beginning, now using a small number of samples, 15 in this case. All the inaccuracies we have seen from the scan matching, or from the original approach, are completely gone: it is a rather clean grid map at one centimeter resolution, a rather accurate estimate of the environment for this dataset, with at least no visual inconsistency that we can observe. I also have a small video of how this approach runs: the robot moves around, you can see that particle diversity is maintained, and when the robot closes a loop it can say very well which samples did a good job and which a bad job, while a few hypotheses are still maintained. Whenever the robot enters a new room, you can see the uncertainty spread out a little, and whenever it comes back, it partially maintains this diversity, because the effective number of particles has not dropped below n/2, so no resampling is carried out; at some point a resampling step is conducted, and then a lot of hypotheses about the past trajectory die out. You can also see, compared to the first approach, that the particle cloud is much more concentrated: we do not have that big diversity of generated trajectories, because the particle filter only draws samples in the areas which really matter, giving a much more concentrated particle cloud around the areas of high likelihood.

Here is another example, our campus in Freiburg: we are sitting here in building 101, that is building 079, the parking space over here, buildings 106, 051, 052, the Mensa building, and all the vegetation and trees that sit in front of our building 101. This map was built with 30 samples over a rather long trajectory of nearly 2 kilometers. And there is another famous dataset, the MIT Killian Court dataset, also called the infinite corridor dataset, recorded at MIT by the group of John Leonard. This is an environment with a lot of long hallways, where you can even pass between buildings through glass walkways, which are pretty hard to match for a scan matching approach because everything is glass; you can walk through several buildings without ever leaving them, which is great if you are doing indoor robotics. A few pictures: this is the main building, which actually sits over here, and this
long corridor is actually this small corridor over here so you have really long corridors in there and so this is a small video which shows how this approach works kind of overlaid with a satellite image and between scene here's so red is always the most likely trajectory if the particle cloud spreads out move through an unknown environment partly cloud spreads out and revisiting typically the the sample set the particle filter can identify which particle did a good job which did a bad job and can eliminate those sometimes this decision is a little bit delayed so you see the robot reentering moving around a little bit and then the resampling is conducted now this happens now very soon and this is an effect of this selective resampling strategy that it doesn't resample all the time and so you can see now here how in the end we end up with a pretty decent estimate at least kind of seems to be globally also consistent if we kind of at least roughly overlay this with a satellite image although these are kind of non auto photos which are exactly aligned over here and we don't know exactly where the corridors in here but at least if we align some of the of the rules that we see with the auto walls that doesn't seem to be a too bad estimate and there's kind of one of the say most advanced particle filter based systems that you find for building grip maps and they actually motor systems or robots which you use a very similar technique it's slightly different they have different sensors set up but uses something very similar so this is a Samsung Hudson robot and you Hoover probably smell not that new anymore is kind of one or two years old almost a prototype at that time you can see it's still some wireless antenna and stuff resistors in here and the important thing is that this guy actually builds a map of the environment using a robe like last particle filter and in order to build a map in order to know where it's space station is so you can actually drive back in it's 
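The selective resampling criterion mentioned here, resampling only when the effective sample size n_eff drops below N/2, can be sketched as follows. This is a minimal illustration, not the GMapping implementation; the low-variance resampler and all interfaces are my own choices:

```python
import numpy as np

def neff(weights):
    """Effective sample size of a (possibly unnormalized) weight vector."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def selective_resample(particles, weights, rng=None):
    """Low-variance (systematic) resampling, executed only when
    n_eff drops below N/2; otherwise particles and weights are kept."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    if neff(w) >= n / 2.0:
        return particles, w                 # keep diversity, no resampling
    # systematic resampling: one random offset, n evenly spaced pointers
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    idx = np.minimum(idx, n - 1)            # guard against float round-off
    return [particles[i] for i in idx], np.full(n, 1.0 / n)

# equal weights: n_eff = N, so nothing is resampled
p2, w2 = selective_resample(list(range(4)), [0.25] * 4)
# degenerate weights: n_eff is tiny, so resampling is triggered
p3, w3 = selective_resample(list(range(4)), [0.97, 0.01, 0.01, 0.01],
                            rng=np.random.default_rng(0))
```

With equal weights the particle set passes through untouched, which is exactly the behavior that preserves hypotheses over loop closures.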
So this is what it does right now: this is the place it found with the mapping approach, and now it perceives its station and does a last small pose correction to find its docking station in order to recharge, so that you do not have to collect a robot which ran out of battery power every evening when you come home. So this robot uses a map, first to clean efficiently, and second to actually find its home location in order to recharge itself. That is a technique which actually made it out of the research labs into commercial products.

However, I do not want to stop here; I also want to report on some of the disadvantages that this technique has, and how to fix them. What do you think is the main limitation of the approach that we discussed so far, in this step of building the proposal distribution? Which assumptions have we made when computing this proposal distribution? A static environment, yes, we still have a static-environment assumption in here, that is absolutely right, that is clearly an assumption. But it was a little bit more technical: during the derivation of this proposal distribution we made some assumptions and also explicitly stated them. So what are the key assumptions that we made in deriving this proposal distribution, to efficiently draw samples from it? Exactly, that was one of the assumptions: we assume that we have an accurate sensor, which allows us to do scan alignment accurately. That is fine; it is a physical setup that we assume the robot has, and I am often fine with such an assumption. If my robot does not have that, say this vacuum robot here, then this assumption is violated, and the engineers at Samsung quite likely invested some time in coming up with a different sensing modality. But there was another assumption that we explicitly made here, a mathematical one, which may hurt us. There is nothing bad about using the measurement in the proposal distribution, that is absolutely sound; it has something to do with that, but that was not the reason. Which assumption have we made in this context? Exactly: this term tau we approximated by a Gaussian, to efficiently draw samples. So what happens if my distribution looks like this? This is actually a real-world distribution, obtained with a robot in a cluttered environment, where on a dense grid I point-wise evaluate tau at every point and compute the exact values. This is plotted for a fixed orientation, varying in x and y; in reality it is a 3D plot, but then you would not see anything anymore. We take this plot and approximate it by a Gaussian distribution. That can work out: if this here is the really good match, we may get a Gaussian centered around here, and that should work. But what happens if this is the right peak, or that one? If we concentrate around this peak, and in reality it is that peak, we are screwed. That is one of the problems we have, especially in the case of loop closures or cluttered environments. We made the assumption that we have a locally bounded area from the odometry where we can be, and a very peaked distribution from the observation model. If the observation model generates just one peak within the area that makes sense from the odometry, we are fine. But if this is not the case, and this is clearly one of those cases, we have a multi-modal distribution, and then this approximation is a poor one. We are still drawing samples from this Gaussian distribution, so maybe by chance a few samples end up at the right spot, but maybe not: we can be lucky, but we do not have to be lucky.
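To make the approximation step concrete, fitting a Gaussian proposal from point-wise evaluated likelihood values can be sketched like this. This is a toy illustration with made-up points and weights, not the lecture's actual implementation; the second example shows why a single Gaussian fails on a bimodal tau:

```python
import numpy as np

def gaussian_from_pointwise(points, likelihoods):
    """Fit mu, Sigma of a Gaussian proposal from point-wise
    evaluated likelihood values (the tau_i discussed above)."""
    x = np.asarray(points, dtype=float)           # K x D pose samples
    w = np.asarray(likelihoods, dtype=float)
    w = w / w.sum()                               # normalize
    mu = w @ x                                    # weighted mean
    diff = x - mu
    sigma = diff.T @ (diff * w[:, None])          # weighted covariance
    return mu, sigma

# unimodal case: the Gaussian fit is fine
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mu, sigma = gaussian_from_pointwise(pts, [1.0, 1.0, 1.0, 1.0])

# bimodal case: two far-apart peaks. The single Gaussian puts its
# mean BETWEEN the modes, which is exactly the failure discussed here.
mu2, _ = gaussian_from_pointwise(np.array([[0.0], [10.0]]), [0.5, 0.5])
```

In the bimodal case the fitted mean lands at 5.0, halfway between the two peaks, in a region where the true likelihood is essentially zero.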
lucky. So the question is how we can handle situations like this, and also how often this actually occurs. For evaluating that, we can take those datasets and simply take the area where the robot can be from the odometry point of view, and point-wise evaluate all possible states. That takes ages, hours or days, but we do not care: we treat this as the right thing and see how far our Gaussian approximation is actually away from this optimal solution. We can use statistical testing, as you have seen in your statistics course, something like the Anderson-Darling test, to test if the distribution is at least approximately a Gaussian. And a second thing we can do is take the exact distribution and our Gaussian approximation and use something like the KL divergence, or some other measure, for estimating how much these distributions differ, that is, whether one distribution has high values where the other one is low. We did that with a couple of datasets and evaluated how often the statistical test says "yes, this is a Gaussian" and how often it says "no, it is not". In roughly 75 to 90 percent of the cases the test said it is a Gaussian; everything looks perfectly fine. For the non-Gaussian cases we can inspect the difference between our approximation and the exact solution in more detail, and it turns out that quite often the distribution is non-Gaussian but has a single mode, for example something like a bounded, box-like, more or less uniform distribution in one dimension. The statistical test says that is not a Gaussian, but such single-mode distributions can often still be approximated reasonably well by a Gaussian, so that does not hurt us too much. The more tricky case is the one with explicitly multi-modal distributions, where we can really fail, and this is a comparably high number: in these datasets between roughly three and six percent. That does not mean we often screw up: in most cases we still sample particles in likely areas of the state space, and since we have multiple samples, say 100, there is still a chance that some samples do a good job, and if after a few time steps we can identify which sample did the right job, we are still fine. But in some situations we actually fail, and this is one of those examples. The plot I showed you comes from a real situation, from analyzing why this approach did not work. What you see here is a map of the MIT computer science and AI lab. This was the corridor, and here was a rather cluttered lab space, and at this position over here the particle filter simply committed to the wrong mode: it computed the Gaussian approximation around the wrong mode, which seemed to match pretty well at the moment, but actually failed. So we got an orientation error at this pose, and then you see that this part of the environment should actually be that part of the environment; these are obstacles or corridors which do not exist in reality, or those two corridors are actually the same. In this case the distribution was clearly multi-modal, but we used a single mode to approximate it, and that caused the system to diverge. The question is: how can we fix this?
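The KL-divergence comparison between the exact, densely evaluated distribution and its Gaussian approximation can be sketched on a discretized grid like this. It is a toy version of the evaluation described above, with made-up numbers:

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """KL divergence D(p || q) for two distributions evaluated
    point-wise on the same grid of states."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    # eps avoids log(0); negligible where probabilities are non-zero
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# identical distributions: divergence is zero
uniform = np.ones(4)
d0 = kld(uniform, uniform)

# a bimodal "exact" tau versus a unimodal approximation concentrated
# on one peak: large divergence, the case where the filter may diverge
exact = np.array([0.5, 0.0, 0.0, 0.5])
approx = np.array([1.0, 0.0, 0.0, 0.0])
d1 = kld(exact, approx)
```

A value of 0 means the distributions agree; the more probability mass the exact distribution puts where the approximation puts none, the larger the value, which is what the histograms in the lecture report.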
We still have this bound from our odometry, so we cannot be teleported somewhere else. What you typically do is take the odometry estimate, then perform scan matching, and then you end up in one of those modes, but it is somewhat random in which mode you end up; typically in the mode which is closest to the mean of the odometry, which is often a pretty good approximation, but in some situations, like this one, this does not work out. Let me illustrate how that typically works. Say this is our distribution in reality, with mode 1 and mode 2. The standard approach takes the odometry over here and then performs scan matching from there: the optimization climbs into this mode over here. We then approximate this mode by a Gaussian distribution and draw from it, so the standard approach generates samples somewhere in this area. That is okay, but what if mode 2 was the right one? Then we fail over here. This, say, was the uncertainty resulting from odometry, which tells me I am somewhere in here, and just by chance I ended up in mode number 1. A better strategy, which we chose, is: first draw samples just from the odometry, so we cover the whole likely area, and then perform scan matching for every one of those samples and see in which mode they end up. The red ones will end up in mode 2 and the blue ones in mode number 1. What we have now is a multi-modal distribution: we got substantially different results from our scan matcher. What I can do is take the number of samples which ended up in this area and the number which ended up in that area, and then draw one mode with likelihood proportional to the number of samples in it, and commit to that mode. Some particles will choose the blue one, some will choose the red one, and then we hopefully have those modes covered. If we do that, it actually works very effectively: the multi-modal distribution is still there, but we simply put some samples here and some samples there, let the approach run, and see which ones survive, and this actually helps to come up with an accurate estimate.

We can also estimate the errors of those distributions. What is shown here, for these three different datasets, is the KL divergence for the Gaussian proposal we had before. A value of 0 means both distributions are the same; the higher the value, the larger the difference between them. It turns out that in a lot of cases there is basically no error, and in some cases we make a small error; this value here should actually read "0.4 and larger", so it collects everything bigger than 0.4. So from time to time we have a high approximation error, in roughly five percent of the cases: this is the multi-modal case that we approximated by a unimodal distribution. If we take the two-step sampling process, where we first draw from the odometry and then do scan matching for every one of those samples, we reduce this error to a really small value, and this happens consistently across the different datasets. The big approximation errors are gone. They are not completely gone: this value is still there, you probably do not see it, but down here there is a very, very small value. There are still situations where we do not cover the modes well, because we do not have enough samples, or they all converge to the wrong mode and did not cover the second mode, which was the right one. It is a sampling procedure, so there is always a chance to make an error.
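The two-step sampling strategy, drawing from the odometry first and then locally "scan-matching" each draw to a mode, can be sketched in one dimension. This is a hypothetical toy: the scan matcher is emulated by snapping each draw to the nearest likelihood mode, and all numbers are made up:

```python
import numpy as np

def two_step_samples(odom_mu, odom_sigma, modes, n, rng):
    """Two-step sampling sketch: (1) draw n poses from the odometry
    Gaussian, (2) 'scan-match' each draw, emulated here by snapping
    it to the nearest likelihood mode. Returns the mode index per sample."""
    draws = rng.normal(odom_mu, odom_sigma, size=n)          # step 1
    modes = np.asarray(modes, dtype=float)
    # step 2: each draw converges to its nearest mode (toy scan matcher)
    assign = np.argmin(np.abs(draws[:, None] - modes[None, :]), axis=1)
    return assign

rng = np.random.default_rng(42)
# odometry says "somewhere around 5"; the likelihood has modes at 0 and 10
assign = two_step_samples(5.0, 3.0, modes=[0.0, 10.0], n=200, rng=rng)
counts = np.bincount(assign, minlength=2)
```

Both modes receive samples, roughly in proportion to how much odometry probability mass falls into each basin of attraction, so the multi-modality is covered instead of being collapsed into a single Gaussian.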
error, and there are some small remaining errors in there, but they are substantially smaller than the errors we saw before. This two-step sampling procedure is a very efficient way of covering even multiple modes. We still have a Gaussian proposal, but we generate multiple hypotheses of where this Gaussian is and then fit a local Gaussian for every one, so we are typically pretty well off: it is effectively a multi-modal distribution, a sum of Gaussians, just distributed over the individual particles. So this two-step sampling procedure allows us to better cover the situations where we have a multi-modal likelihood function, where tau is a multi-modal distribution and approximating it by a single Gaussian is not a good solution. In the unimodal cases we obviously end up with exactly the same result, maybe a little bit more costly, but that does not really matter; it is not the bottleneck that we generate more samples and look to which mode they converge, and if they all converge to the same mode we have a unimodal distribution anyway. So it is a minimal computational overhead.

To summarize this Gaussian proposal: yes and no. It is also something which can give you an idea why techniques which rely on Gaussian distributions all the time are not always the best idea. In most cases the Gaussian is actually a pretty good representation of what happens: as we said before, in up to 90% of the cases the statistical test says it is a Gaussian. In around five percent of the cases it says it is not really a Gaussian, but it has a single mode; the shape is not perfectly Gaussian, but the Gaussian approximation will not be too bad. But there are a few cases, around three to six percent in all the datasets analyzed here, where it turns out that there is a substantial difference. You can be lucky that your Gaussian approximation by chance comes out at the right mode, and quite often you are lucky, but there are situations where you are not, and then you completely fail if you have no way to cope with multi-modal distributions.

Okay, to wrap this up: what we presented today was FastSLAM 2.0, the idea of taking into account the most recent observation in the proposal distribution, to come up with an efficient algorithm for building maps with particle filters, here especially in the context of grid maps, because we are interested in actually building grid maps. This idea of using an improved proposal distribution is very similar to doing scan matching, but now on a per-particle basis: we do not do the scan matching beforehand, building small chunks of maps; it is very similar to doing the scan matching per particle, and this is the result of integrating the most recent observation into the particle filter, and here especially into the proposal distribution, in a mathematically sound way. The second thing was the selective resampling strategy, which also helps to maintain particle diversity. I have not shown that on a lot of plots here, but it is one easy way of allowing the particle filter to maintain multiple distinct hypotheses over more extended periods of time, and it allows me to efficiently come up with solutions.

We are now at the point where I would like to close the particle-filter-based approaches, and also one step where we leave the world of the Probabilistic Robotics book a little bit, because this part is not covered by the book anymore; the developments here are newer, these results are roughly from 2005 to 2007. The other thing is that what we do in the remainder of the course, looking into graph-
based SLAM approaches and least-squares error minimization to solve them, is something which I would like to explain slightly differently than it is done in the book. So the part we covered so far is covered by the book; the part in the remainder of the course is partially covered by the book and partially introduced in a slightly different way, but there are a lot of online resources that I put on the website, either original papers or tutorials, which also give you a chance to reread some of the topics that will be presented during the last part of the course. So we are done with particle filters now, and from January on we will look into graph-based approaches to SLAM. It is the third paradigm that we investigate, and it is today considered the most successful one: most approaches published after, say, 2008, and most of the new or modern systems, use graph-based approaches, because they have some nice properties. They also make some assumptions again, especially Gaussian assumptions, which you can also try to get rid of, but they have some really nice properties that are really useful for building those maps, and we will dive into those approaches in detail, with different techniques, so this last paradigm will also be covered intensively, actually until the end of the course. At the end we will also look a little bit into data association, and into how to build complete systems, not only the engine in the back which does the whole estimation process, but also how to come up with aligning observations and techniques like that.

If you ever want to look at it, there is actually an open-source implementation of the approach I presented here available; it is called GMapping, and you find it on OpenSLAM or other open-source repositories, just Google for it. I have to say that the code has not been maintained since 2008, so I am actually not sure if it still compiles; it requires Qt3, so you will probably need a VirtualBox with an old Ubuntu to actually get it running in case you want to try it. But it is available under this URL in case you want to try it out or dive into it.

That is it from my side for today. Are there any final questions which have not been raised during the presentation? Okay, then: there is a new mapping sheet online; it is also a project which asks you to actually implement FastSLAM, providing some of the functionality as stub code. Use the time over Christmas; you still have a full week, and I strongly recommend you to actually work on that and get a complete system running which can run this FastSLAM approach. It will be discussed on Wednesday the 7th of January, so after the Christmas break there is an exercise on the 7th, and then from January on we will look into the graph-based approaches. So from my side, the particle filters are over; I am looking forward to the next step and see you all next year. Thank you very much.
SLAM_Course_2013 | SLAMCourse_04_Extended_Kalman_Filter_201314_Cyrill_Stachniss.txt
Welcome to the second part of the course. We are now looking into one specific implementation of the Bayes filter, the Kalman filter and the extended Kalman filter, which are two variants of the Kalman filter paradigm. The Kalman filter is probably the most frequently used Bayes filter; it is used in a lot of applications, was developed around the 1950s, and it has a nice property: for the case that you work with Gaussian distributions and have linear models, you can actually show that it is the optimal estimator, so there is no better way of estimating. Of course, in reality nothing is perfectly Gaussian and nothing is perfectly linear, so the Kalman filter may not be the optimal solution to address the SLAM problem; that is something we will experience during this course.

As I said, why are we doing this? Because we want to address the SLAM problem: simultaneous localization and mapping, estimating the pose of the robot and the map of the environment. This is a state estimation problem, so let us look at how state estimation works. We introduced the Bayes filter as a general framework a few minutes ago: we have the prediction step and the correction step. We will now look into the Kalman filter and see how it actually realizes the prediction step and the correction step. The Kalman filter is one implementation of the Bayes filter, and it requires that your models are linear and your distributions are Gaussian; it really makes this assumption. If these assumptions are justified, then, as I said before, this is the optimal estimator in this case.

Before we dive into the details of the Kalman filter, a very short repetition of what a Gaussian distribution is. This is the standard equation for the Gaussian distribution: a normalization factor sitting in front of an exponential function; the mean estimate mu, which is the mode of the Gaussian distribution; and a covariance matrix Sigma, which tells us that the higher the values in the covariance matrix, the higher the uncertainty. What actually sits in the exponent is the inverse of the covariance matrix, so Sigma should not have a zero determinant, otherwise it would not be invertible, which would mean an infinite uncertainty; then we do not know anything, and we cannot divide by zero, so to say. In the 1D case, that is a typical Gaussian distribution: this is the mean, and you have the points 1, 2, 3 sigma away from it; the area below this distribution from minus 3 sigma to 3 sigma covers roughly 99.7% of the probability mass, so most of the events actually fall in this area. In 2D you can express this by ellipses, and in 3D by ellipsoids; it is the standard Gaussian distribution as you should have seen it in the past already.

If we have a Gaussian distribution over multiple variables, say x consists of xa and xb, where xa and xb could themselves be vectors, say each two-dimensional, so that x is a four-dimensional vector and we have a four-dimensional Gaussian distribution, just as an example, then the marginal distributions are also Gaussians, and the conditionals are also Gaussians. So if I know that p(x) is a Gaussian distribution, then I also know that p(xa) is a Gaussian and p(xb) itself is a Gaussian distribution, and I can compute these distributions by marginalizing out the other variable. The same holds for the conditionals: if I know that p(x) is a Gaussian, then I also know that p(xa | xb) is again a Gaussian distribution, and the other way around, p(xb | xa) is also normally distributed. So whenever we compute a marginal, a conditional, or a convolution of Gaussians, the resulting distribution is again a Gaussian.
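As a quick numerical companion to the density just described, here is the multivariate normal evaluated directly from the formula. This is a minimal sketch, not library-grade code; `numpy` is assumed:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a multivariate normal N(mu, sigma) at x.
    sigma must be invertible (non-zero determinant), which is exactly
    the requirement mentioned in the lecture."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    mu = np.atleast_1d(np.asarray(mu, dtype=float))
    sigma = np.atleast_2d(np.asarray(sigma, dtype=float))
    d = len(mu)
    diff = x - mu
    # normalization factor in front of the exponential
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(sigma))
    # the exponent contains the INVERSE of the covariance matrix
    return float(norm * np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)))

# 1D standard normal at its mean: 1 / sqrt(2 * pi)
p0 = gaussian_pdf(0.0, 0.0, 1.0)
```

Using `np.linalg.solve` instead of explicitly inverting sigma is the usual numerically safer way to apply the inverse covariance.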
This is important for the Kalman filter: when we manipulate our distributions, for example by conditioning, as you have seen for the sensor model and for the motion model, it is important that they stay Gaussian. Even if we update our Gaussian distribution with the motion model or with the observation model, we still need to make sure the result is still a Gaussian, and therefore these closure properties are important for us.

In the Gaussian case, marginalizing out a variable is very, very easy; there is nothing difficult about it. Say p(x) is a Gaussian over two variables xa and xb, with mean and covariance matrix, where, say, the first n dimensions correspond to xa and the second m dimensions to xb. Then we can express the mean in two parts, mu_a, the first n dimensions, and mu_b, the second m dimensions, and the covariance matrix in the corresponding blocks Sigma_aa, Sigma_ab, Sigma_ba, Sigma_bb. The marginal distribution over xa is the integral over the joint probability distribution, integrating out xb; it is again normal, as we said, with mean mu_a and covariance Sigma_aa: p(xa) = N(mu_a, Sigma_aa). So if I have a high-dimensional Gaussian distribution and I want to compute the marginal for a small number of elements, I just need to cut out a part of the mean vector and a part of the covariance matrix, and that is it, I am done. That is very simple and can be done very, very efficiently; that is great.
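The "just cut out the blocks" rule for marginalization can be shown in a few lines. This is a sketch with made-up numbers; `numpy` is assumed:

```python
import numpy as np

def marginalize(mu, sigma, idx):
    """Marginal of a joint Gaussian over the variables in idx:
    just pick the sub-vector of the mean and the sub-block of the
    covariance. No computation beyond slicing is needed."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    idx = np.asarray(idx)
    return mu[idx], sigma[np.ix_(idx, idx)]

mu = np.array([1.0, 2.0, 3.0, 4.0])        # (x_a, x_b), each 2D
sigma = np.diag([0.1, 0.2, 0.3, 0.4])
mu_a, sigma_a = marginalize(mu, sigma, [0, 1])   # p(x_a)
```

`np.ix_` extracts the Sigma_aa block in one step; the operation is O(size of the marginal), independent of the dimension of the full state.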
Conditioning is unfortunately not that easy. With exactly the same setup as before, if we now want to compute the conditional distribution p(xa | xb), by definition this is the joint p(xa, xb) divided by p(xb). It is again Gaussian distributed, but the mean and the covariance matrix are now harder to compute: the conditional mean is mu_a + Sigma_ab Sigma_bb^{-1} (xb - mu_b), and the conditional covariance is Sigma_aa - Sigma_ab Sigma_bb^{-1} Sigma_ba. I do not want to go into the derivation of where that comes from; it is not something you do within five minutes, it takes a little longer. The important thing to note here is the Sigma_bb part, because this is inverted: in order to do this operation, I need to invert this block. If I have a high-dimensional Gaussian distribution and I want to estimate just a small quantity out of it, given that I know the rest, it is a very costly operation, because I need to invert a large part of this matrix. This may come up when estimating, say, the position of the robot given that I know where the landmarks are. It is also something you will find later on in the Kalman filter, that you need to invert parts of this matrix, and this makes it quite expensive. You can also see here, for example, that if we basically do not know anything about the variable xb, so we have an extremely high uncertainty, then this inverted term goes basically towards zero, and what we end up with is just the mean mu_a; the other part has no influence. That means if I compute p(xa | xb) and I basically do not know anything about xb, it is more or less the same as p(xa): if Sigma_bb is extremely huge, its inverse is close to zero, the whole correction term goes away, and I stay with the mean of xa, with xb having no influence. If, on the other hand, xb has a strong influence, the influence is mapped via these terms, and there is a substantial change in the mean estimate; that is just to get a little bit of an idea of what is going on. That was a very brief revisit of the properties of the Gaussian distribution; these are the most important ones used in the Kalman filter.
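The conditioning formulas just stated can be sketched directly, including the "no information about xb means no influence" limit. A minimal sketch with hypothetical numbers; `numpy` is assumed:

```python
import numpy as np

def condition(mu, sigma, na, xb):
    """p(x_a | x_b = xb) for a joint Gaussian where the first na
    dimensions are x_a. Note the inversion of the Sigma_bb block,
    the costly step discussed in the lecture."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    mu_a, mu_b = mu[:na], mu[na:]
    saa = sigma[:na, :na]
    sab = sigma[:na, na:]
    sba = sigma[na:, :na]
    sbb = sigma[na:, na:]
    k = sab @ np.linalg.inv(sbb)                  # the costly inversion
    mu_c = mu_a + k @ (np.asarray(xb, dtype=float) - mu_b)
    sigma_c = saa - k @ sba
    return mu_c, sigma_c

# independent blocks: conditioning on x_b changes nothing about x_a
mu_c, sig_c = condition([0.0, 0.0], np.diag([1.0, 1.0]), 1, [5.0])
# correlated case: observing x_b shifts the estimate of x_a and
# shrinks its uncertainty
mu_d, sig_d = condition([0.0, 0.0],
                        np.array([[1.0, 0.8], [0.8, 1.0]]), 1, [1.0])
```

In the independent case Sigma_ab is zero, so the correction term vanishes, which is the same effect as Sigma_bb going to infinity: xb carries no usable information about xa.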
As I said before, the Kalman filter assumes Gaussian distributions, which we have just covered, and it assumes linear models. Linear models means that the motion model and the observation model are linear functions, and this is the way they are represented: the new state x_t is a matrix A_t times the previous state, plus a matrix B_t times the odometry command, plus a term epsilon_t, which is a random variable, a noise term expressing that the motion is noisy. So we have two matrices A and B, which may change at every point in time t, and given the previous state and the executed control command, they map us to the new state. Since we can describe this with matrices, it is a linear function in more than one dimension. The same holds for the observations: the matrix C_t expresses how I obtain my expected observation given that I know the state of the system. Given that I know where the robot is and what the world looks like, I can actually estimate what I should observe. Again it is a matrix, so it is a linear mapping between the world state and the observation space.

Yes, please? Where do A and C come from? I have to specify them; this is knowledge I need to put into the system in order to implement a Kalman filter. I need to specify how the system moves. For example, if you have a robot that drives on wheels, say a 1D robot which can only go forward and backward, no orientation, everything perfectly linear, then this matrix A expresses how the state of the system, and of the world, changes when nothing is done, when no command is executed, so how it changes by itself. A robot on wheels typically only moves if the wheels are turning; otherwise nothing changes, so in this case A is typically an identity matrix.
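The linear model x_t = A x_{t-1} + B u_t, z_t = C x_t can be made concrete with a tiny simulation. The matrices below are hypothetical, a made-up 1D robot with state (position, velocity), not an example from the lecture, and the noise terms are left out to show the mean transformation only:

```python
import numpy as np

# Hypothetical 1D robot on a line: state = (position, velocity)
A = np.array([[1.0, 1.0],      # position += velocity * dt (dt = 1)
              [0.0, 1.0]])     # velocity stays unless commanded
B = np.array([[0.0],
              [1.0]])          # control adds directly to the velocity
C = np.array([[1.0, 0.0]])     # we observe the position only

def step(x, u):
    """Noise-free linear motion model: x_t = A x_{t-1} + B u_t."""
    return A @ x + B @ u

def expected_observation(x):
    """z_t = C x_t: what we should observe given the state."""
    return C @ x

x = np.array([0.0, 0.0])
x = step(x, np.array([1.0]))   # accelerate by 1
x = step(x, np.array([0.0]))   # coast for one more step
z = expected_observation(x)
```

After accelerating once and coasting once, the state is (position 1, velocity 1), and C picks out the position as the expected measurement, exactly the "I should measure the wall at 5 meters" idea from the lecture, up to the sensor noise term.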
nothing changes so in this case this a would is typically an identity Matrix that's different if you have an helicopter which is in the air and you have wind for example even if you don't execute any command it will be it will be taken away whatever you want to estimate or you have even systems if you don't apply anything it's still it's still driving so let's say you have um whatever an object which um even if you execute no command um it's a continues to keep its velocity because there's a system underlying which control control of which which enforces that you want to to a high level estimation or the different ways so there are a lot of different things you can imagine why a system should change its current state even though one is not applying a command and the second part over here is just says how is a command mapped into a change in state so if I say okay I execute go one meter per second forward for one for one second that's something which is a command and this needs to be represented by B how this changes my state and this actually needs to know what's kind of what the physics of the system that they doing where are the wheels what happens if I execute a certain velocity for a certain point in time or turn the motors all the parameters May sit in this Matrix be the relevant ones okay any further questions so far yes please um these two equations represent the the whole um that we learned about the basan uh filter that these these two are the uh calculations how I get to my to the point where everything is is like I calculate my position um so these two terms are kind of the motion model and the observation model so these are kind of the the the free parameters that you have when you implement your your system you need to specify your motion model and you need to specify your observation model and what we presented here is kind of the mean for this for the motion and the mean for the observation model so there's typically you have still an uncertainty 
These are only the mean transformations for the motion and the observation model, and we put these equations into the Bayes filter equations. And yes, that's true — with the equations we will see very soon it is pretty hard to make the connection to the probability distributions we've seen before. The reason is that in the Gaussian case I only need to manipulate a mean and a covariance, and I don't need to specify the full distributions; that's much easier done the way I'll present it very soon. But you're completely right: these are our three parameters that go into the equations that allow us to update our state recursively. Any further questions?

OK. So again: the matrix A is, as I said before, an n×n matrix which tells us how the state of the system changes if no command, no control, is executed. Then we have the matrix B, an n×l matrix, where n is the dimensionality of the state and l the dimensionality of our odometry command; it describes how the control u_t, which is l-dimensional, changes the state from x_{t-1} to x_t. In reality B often should be nonlinear, but it has to be linear, otherwise the Kalman filter doesn't work — so I have to fix that in some way. And then we have the matrix C, a k×n matrix, where k is the dimensionality of our observation and n again the dimensionality of our state; it describes how we map from a world state to an observation. You can read that as: what should I expect to observe, given the world is in the current state? I just take the current state x_t, multiply C_t times x_t, and I get my expected observation. It's like standing here knowing the wall is 5 m away: I should measure 5 m, plus an uncertainty associated with my sensor measurements.

Yes, please? — What do I do when I don't know anything about the world? Another Gaussian model? — If I don't know anything about the world, it is probably a Gaussian with a zero mean and a more or less infinite covariance matrix. As soon as I start collecting information, this belief becomes more peaked and I can make better predictions; that's a key ingredient of the Kalman filter. — So I always recompute these matrices? — Yes; there is a t index here, so you can recompute them as you learn more. Note that the current state estimate also has a covariance matrix associated with it, which tells you how certain you are, and that is taken into account, as we will see very soon. What we wrote here is only the mapping of the mean; we will soon see how the covariance matrices enter to account for the uncertainty we have. If, say, you know there are 10 landmarks in the environment but have no idea where they are, one thing you can do is set the means to zero and the corresponding covariance entries to more or less infinity — essentially a uniform distribution over where they lie in space — and the more you observe the environment, the more peaked they will get.

OK. And we have two further terms, random variables representing the noise in the motion and in the observations; they are expressed by the covariance matrices R_t and Q_t — R_t for the control and Q_t for the observation. One warning: depending on which book you use, R and Q are often swapped. In the standard literature it is actually the other way around, but I kept the notation of the Probabilistic Robotics book. So whenever you look at a different resource, don't get confused — you may need to swap Q and R in their meaning. They are just the control noise and the observation noise; if they are swapped, it is just a different standard notation.

OK, so we have our matrices to express the motion and observation model; now let's take them and put them into Gaussian distributions, because we said we want Gaussian motion models and Gaussian observation models. What does the motion model look like under the noise assumption with our linear motion? We need to write down a Gaussian distribution: given I know I am in x_{t-1}, given I know I execute u_t, and given my matrices A and B, I want to specify the Gaussian that describes this probability distribution. So we need something like p(x_t | x_{t-1}, u_t) = some factor — the normalizer of the Gaussian, which I don't want to spell out in detail — times exp(−1/2 · …), and what do we need to put in there? You've seen everything you need in order to specify it now — or you should have. What plays the role of the variable? It is x_t: this is a Gaussian distribution in x_t with some mean and variance, so x_t is the variable we are investigating. And how do we continue — what's the mean? Consider that you know where the system has been before and you know which command has been executed: in which state should the system end up? — Yes, please? — Exactly. Let's write the full term: (x_t − A x_{t-1} − B u_t)ᵀ, and then the same term again on the right. And what goes in between? Not the measurement noise — why the observations? This is a motion command, so we put the motion noise in here, inverted: R_t⁻¹. So that means: given I know where the system was before
and given I know which command was executed, I can compute for every possible new pose its likelihood using exactly this equation. And this term here is the linear model — this is what the Kalman filter assumes, that we have this linear model in here. So we end up with exactly this term. Everything fine? Yep, that's exactly what we have.

OK, now you're trained in how to do that. [Applause] For the observation model: proportional to exp(−1/2 (z_t − C_t x_t)ᵀ Q_t⁻¹ (z_t − C_t x_t)) — here we go, that's exactly what we should get. Again, this is the assumption of a linear observation model. Now we're coming closer to what you were mentioning: this allows us to describe these probability distributions as Gaussians, which we can then plug into our Bayes filter — we can take this, put it into the big equation with the integral, and get really, really huge terms. So we have our previous belief, which we assume to be Gaussian — that's what we start with in the beginning, and then it stays Gaussian — we have our motion model as described, and we have our observation model as described. Perfect, we are done — except for showing that the result is always a Gaussian and how its mean and covariance parameters look, which is really non-trivial; you can find the details in the Probabilistic Robotics book, Section 3.2.4. But we know how to specify all these distributions; they are all Gaussian, so the result will be Gaussian, and the algorithm to compute these Gaussians is the key Kalman filter algorithm: lines two and three are the prediction step, which tells us how to make our prediction; lines four to six are the correction step. I completely agree that it's hard to see the correspondence between what you see on this slide and the previous slide — but if you write out the Gaussians with all the models we specified on the blackboard and work through computing the new Gaussian from them, you will get exactly this mean and covariance update.
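The Gaussian motion model derived on the blackboard can be sanity-checked numerically. Below is a hedged 1D sketch — the matrices A, B and the motion noise R collapse to scalars, and all values are illustrative, not from the lecture.

```python
import math

A, B = 1.0, 1.0   # linear motion model (illustrative 1D values)
R = 0.25          # motion noise variance (the R_t of the lecture)

def motion_likelihood(x_t, x_prev, u):
    """p(x_t | x_{t-1}, u): Gaussian with mean A*x_prev + B*u, variance R."""
    mean = A * x_prev + B * u
    norm = 1.0 / math.sqrt(2.0 * math.pi * R)
    return norm * math.exp(-0.5 * (x_t - mean) ** 2 / R)

# The likelihood is highest exactly at the predicted pose A*x_prev + B*u:
at_mean = motion_likelihood(3.0, x_prev=2.0, u=1.0)
off_mean = motion_likelihood(3.5, x_prev=2.0, u=1.0)
```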
If you do the computation right, following this algorithm, you get exactly that mean and covariance estimate in the end — it's just a more compact way of writing it. Let me discuss it in a bit more detail. The bar over a quantity denotes the predicted value: μ̄ and Σ̄ are the mean and covariance after executing the motion. As we said, the predicted mean is just A_t times the previous mean plus B_t times u — exactly the linear model we described before. And for the covariance: assume A is the identity, so nothing external changes the state; then the new uncertainty is the old uncertainty plus the uncertainty added through the motion. So if I have some uncertainty and execute a new motion command, I increase the uncertainty — this equation tells you that motion always adds uncertainty, it makes the system more uncertain, because you add the noise term. If you had no motion noise and moved perfectly, it would stay the same, but that never happens in reality. And why is this A written here? A can be used, for example, for scaling, if you have a system where the scale grows over time, or for rotations — if the world is rotating you can use rotation matrices to rotate the state; it captures how the state of the system changes without controls. OK, that's the prediction step — it wasn't too complicated.

Now let's look into the correction step. If you go back to this equation, you can see this is a Gaussian and this is a Gaussian, so you multiply two Gaussian distributions. If you multiply two Gaussians, you again get a Gaussian, and the mean of the new Gaussian is a weighted mean of the means of the individual Gaussians, weighted by their uncertainties.
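The prediction step (lines 2–3) and correction step (lines 4–6) just described can be written out for the 1D case. This is a hedged sketch with illustrative noise values, not the lecture's exact code:

```python
def kf_predict(mu, sigma, u, A=1.0, B=1.0, R=0.25):
    """Prediction step: push the belief through the linear motion model.
    Motion always adds uncertainty: sigma grows by the motion noise R."""
    mu_bar = A * mu + B * u
    sigma_bar = A * sigma * A + R
    return mu_bar, sigma_bar

def kf_correct(mu_bar, sigma_bar, z, C=1.0, Q=0.1):
    """Correction step: the Kalman gain K weighs prediction vs. measurement."""
    K = sigma_bar * C / (C * sigma_bar * C + Q)
    mu = mu_bar + K * (z - C * mu_bar)
    sigma = (1.0 - K * C) * sigma_bar
    return mu, sigma

mu, sigma = 0.0, 1.0                        # initial belief
mu, sigma = kf_predict(mu, sigma, u=1.0)    # drive 1 m: mean 1.0, var 1.25
mu, sigma = kf_correct(mu, sigma, z=1.2)    # measure 1.2 m: weighted mean
```

The corrected mean lands between the prediction (1.0) and the measurement (1.2), and the variance shrinks — the weighted-mean behavior discussed next.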
If you have one Gaussian which is very certain and one which is very uncertain, the product of both will be a Gaussian very close to the certain distribution; and if both have exactly the same shape but different means, you end up exactly in the middle. So the product of two Gaussians gives a new Gaussian whose mean is a weighted mean, and this is exactly what happens down here. This K is the so-called Kalman gain, and the Kalman gain trades off how certain I am about the observations with respect to the motion. The formula over here looks a little complicated, but I would like to give you at least a very short intuition that it computes something like a weighted mean.

So, one thing that can happen: say we have a perfect sensor. What is Q_t then? — Yes: Q_t is basically a matrix with only zero entries, meaning if I measure something, I know it perfectly. If we do that, what happens? K_t is Σ̄ Cᵀ (C Σ̄ Cᵀ + Q)⁻¹ — let's drop the time index t — and with Q = 0 this term is not affected, so I can rewrite (C Σ̄ Cᵀ)⁻¹ as C⁻ᵀ Σ̄⁻¹ C⁻¹. Then Cᵀ C⁻ᵀ gives the identity, Σ̄ Σ̄⁻¹ gives the identity, and the only thing remaining is C⁻¹. If this is the case, I can plug it into equation number five to compute the mean: my mean is the predicted mean plus the Kalman gain — which we said is now C⁻¹ — times (z_t − C μ̄). Right? Yep, correct. Let me disable the projector so you can see it better. The equation simplifies to μ̄ + C⁻¹ z_t − C⁻¹ C μ̄ — note this is μ̄, the predicted mean — so what's written there is mean minus mean; those cancel, and what remains is only C⁻¹ times z. And C⁻¹ is the inverse mapping, which maps not from the state to the observations but from the observations to the state. So this takes the observation, maps it into the state space, and says: that is your new mean. This is perfectly in line with what we said: with a perfect sensor, once we measure, we know what the world looks like — all the prediction we did before is completely erased and only the observation remains.

We can do exactly the same for a sensor which provides no information at all. If it provides no information, this Q_t is more or less infinity, so this sum also turns into something like infinity, and we have infinity to the power of minus one — basically, the Kalman gain becomes zero. A Kalman gain of zero means the new mean is just the predicted mean: the sensor information didn't contribute anything. So what this filter basically does is compute a weighted mean between the prediction and the observation.

I have a short example which shows that. This is my prediction step — that's what the system should look like now. I get the measurement, which is the green line, I merge both of them, and what I get out is the blue one. You can see the blue one is closer to the green one than to the red one, from the mean's point of view, because I was more certain about my measurement — it has less variance than the red curve — so it really is just a weighted mean. We can continue this in the next step: this was the previous situation, this is now my current state estimate, I make a prediction — the robot moves, say, 50 m forward — and this is my new prediction, because I get more uncertain when executing the action. Then I get a new measurement, and the result is a weighted mean of both; here they are about equally certain, so the new mean sits roughly in the middle.
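The two limiting cases worked out above — a perfect sensor (Q → 0) and an uninformative one (Q → ∞) — can be checked in 1D. A small sketch with illustrative numbers:

```python
def corrected_mean(mu_bar, sigma_bar, z, C=1.0, Q=0.1):
    """One Kalman correction of the mean: mu_bar + K * (z - C*mu_bar)."""
    K = sigma_bar * C / (C * sigma_bar * C + Q)
    return mu_bar + K * (z - C * mu_bar)

mu_bar, sigma_bar = 1.0, 0.5   # predicted mean and variance
z = 2.0                        # measurement

perfect = corrected_mean(mu_bar, sigma_bar, z, Q=1e-12)  # Q ~ 0:   K ~ 1/C
useless = corrected_mean(mu_bar, sigma_bar, z, Q=1e12)   # Q ~ inf: K ~ 0
# 'perfect' is pulled to the measurement (2.0): the prediction is erased.
# 'useless' stays at the predicted mean (1.0): the sensor adds nothing.
```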
Then we can continue this process. OK, so what I've shown now is that the Kalman filter, under the assumption of linear models and Gaussian distributions, gives us a way of computing the new mean and covariance matrix based on the previous ones, the command, and the observation model — and that it is basically a weighted mean: it weighs how certain I am about the observation against how certain I am about the motion, and combines the two as a weighted mean. That's all it is.

Now let's meet reality. The Kalman filter assumed linear models and Gaussian distributions; the question is what happens if this is not the case. It turns out that for most realistic situations it is not the case, because, as I said, in most realistic robotics scenarios you have an orientation somewhere — the system looks in some direction or takes a measurement in a certain direction. This involves angles, which directly leads to sine and cosine functions, which are nonlinear, and that leads to problems. So what happens if we drop our linearity assumption? Say we have some nonlinear function g which tells us how to map from the previous state and the motion command to the new one, plus some Gaussian noise — and the same for the observation: a nonlinear observation function h which maps from the state to the observation space. What happens if we just ignore that the functions are supposed to be linear?

Let's see. This plot may be a little hard to read at first, but I actually like it a lot. What you see down here is the current mean estimate — the current Gaussian. If you transform it through a linear function — mirroring it along this line — this is what the resulting distribution looks like: you take every value here, map it through the linear function, and get the corresponding value there; with a linear function this can be expressed nicely. So you have a Gaussian, you map it through a linear function, and the result stays Gaussian. That's important for the Kalman filter, because we proceed one step after the other and everything must remain Gaussian; if something becomes non-Gaussian in between, we break.

Now look what happens with a nonlinear function. This function is nonlinear — not even a particularly ugly one — and we map our Gaussian through it, and that's what we end up with. However good your eyesight is, this is not a Gaussian distribution. That's the problem: if we simply apply our nonlinear function, we no longer end up with a Gaussian, and then we can't apply our Kalman filter anymore. The nonlinearity destroys the Gaussian, and it doesn't make sense to compute a mean and a variance if we are that far away from a Gaussian.

OK, what can we do to resolve that? Nonlinear functions lead to non-Gaussian distributions, which we can't use — think of a dirty trick: how would you fix that? — Yes: we could just linearize our function. Ignore that it is nonlinear, take the best estimate we have, and linearize around it. That's exactly what we do: local linearizations. It is everything the extended Kalman filter does — the extended Kalman filter fixes the problem of nonlinear functions by linearizing those functions and then doing exactly what the Kalman filter does.

How do we linearize? We have our function which maps a known state x_{t-1} and a command u_t to the new state; this is nonlinear, so we linearize it: we evaluate the function at the current best estimate — the previous mean μ_{t-1} is our linearization point — then compute the first derivative, the partial derivative of g with respect to x_{t-1}, and multiply it by how far x_{t-1} is from the linearization point. So: evaluate at the linearization point, linearize around it, and weight by the distance from it — just a first-order Taylor expansion. We can do exactly the same for the correction step with our function h.

The terms H_t and G_t which express these derivatives are Jacobians. Who knows what a Jacobian is? OK, a very brief revisit: a Jacobian matrix is in general a non-square matrix, typically m×n. Given a vector-valued function — our function g has m components — the Jacobian collects all the partial derivatives: the first dimension of the function derived with respect to the first variable, the first dimension derived with respect to the second variable, and so on, for g1, g2, g3, … So it's a matrix containing all partial derivatives of the individual dimensions of the vector-valued function with respect to the variables involved. It is the generalization of the 1D derivative dg/dx, which you all know, to the higher-dimensional case. If you visualize it: for this function, say the green curve over here, it gives you a plane — a linear approximation at that point.
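The first-order Taylor expansion just described can be illustrated numerically. The function g below is an arbitrary stand-in for a nonlinear model, not one from the lecture:

```python
import math

def g(x):
    return math.sin(x)        # some nonlinear function

def g_prime(x):
    return math.cos(x)        # its derivative (the 1D "Jacobian")

mu = 0.5                      # linearization point = current mean estimate

def g_lin(x):
    """First-order Taylor expansion of g around mu."""
    return g(mu) + g_prime(mu) * (x - mu)

# The approximation is good near the linearization point, poor far away:
err_near = abs(g(0.6) - g_lin(0.6))
err_far  = abs(g(2.0) - g_lin(2.0))
```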
Or for any other function — a parabola, say — you take the linearization point and it gives you a plane in the higher-dimensional space; it's the step from a line in 1D to a plane in 2D. These linearized functions are linear functions: they are matrices, because I fix one linearization point and obtain a matrix filled with values, with no variables involved anymore — and that is of course a linear function, valid only for this single linearization point. For a different linearization point I need to recompute it.

So we had the case before where the nonlinear function screwed things up. What we do now is take our best estimate, which is this point here, fit a linear function — plug in our Taylor approximation — and map the Gaussian through this linearized function, and we get this red curve, which is again a Gaussian distribution. It's the best ad-hoc fix that pops into our mind. — Do we do that for every point of the nonlinear function? — We take the mean of our current estimate, and only the mean; otherwise we would again have a nonlinear function, because the linearization would change from point to point. If the mean sat somewhere else, say over here, we would linearize the function there and get a completely different linearization. So the current mean is our linearization point.

What you can also see here: if we computed the mean by mapping through the nonlinear function — approximating the result by a Gaussian — we would get the blue curve, and if we do the linearization we get the red one, and there is a small discrepancy between them. If the Gaussian down here gets fatter — higher uncertainty — the difference between the red and the blue curves gets bigger; if the uncertainty gets smaller, the difference gets smaller. So the observation is: when you linearize the function at one point, and all the relevant values are not too far from the linearization point, the approximation is actually pretty good. The further the values are from the linearization point, the worse the approximation becomes.

If you look at the linearized motion and observation models, we have exactly what we had before with A_t and B_t, except that the linearized model is now plugged into our Gaussian: x_t minus the function evaluated at the linearization point — the previous mean — minus the Jacobian times the distance from the linearization point. We do this for the motion model, and exactly the same for the observation model, where the observation function is evaluated at the predicted mean, minus its Jacobian times the distance from the linearization point. Then I again have linear functions, and I can apply the standard Kalman filter — I can really do exactly the same. If you compare the Kalman filter with the extended Kalman filter, the only things that changed are: this function here is no longer A x + B u but my nonlinear function g; the same down here, what was C_t x_t is now the nonlinear function h; and second, we have to replace the matrix A by G and the matrix C by H — G and H being the linear approximations of what we had before. That's the only thing that changed. So you can say the extended Kalman filter is exactly the Kalman filter; the only thing it does is take the nonlinear functions and linearize them.
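Putting the pieces together, one EKF cycle in 1D might look as follows. This is a hedged sketch: the toy models g(x) = x + cos(u) and h(x) = x², and all noise values, are my own illustrative choices, not the lecture's.

```python
import math

def ekf_step(mu, sigma, u, z, R=0.01, Q=0.01):
    """One EKF predict+correct cycle in 1D with toy nonlinear models."""
    # Prediction with nonlinear g; its Jacobian G = dg/dx evaluated at mu.
    g = lambda x: x + math.cos(u)
    G = 1.0
    mu_bar = g(mu)
    sigma_bar = G * sigma * G + R
    # Correction with nonlinear h; Jacobian H = dh/dx at the predicted mean.
    h = lambda x: x ** 2
    H = 2.0 * mu_bar
    K = sigma_bar * H / (H * sigma_bar * H + Q)
    mu_new = mu_bar + K * (z - h(mu_bar))       # innovation uses h itself
    sigma_new = (1.0 - K * H) * sigma_bar       # covariance uses Jacobian H
    return mu_new, sigma_new

mu, sigma = ekf_step(mu=1.0, sigma=0.1, u=0.0, z=4.2)
```

Note the split described above: the innovation evaluates the nonlinear h at the predicted mean, while the gain and covariance use the Jacobian H of the linearization.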
It then does all the steps needed to fix the problem simply by linearization: we linearize our nonlinear functions, use the linear approximation in place of the nonlinear function, and execute the Kalman filter. The only thing is that these Jacobians G and H of course need to be recomputed at every point in time, because the linearization point changes: as we move on, we may have a different linearization point, and then the first derivative changes. And if we actually have a linear case, the linearization does no harm: G turns into A and H turns into C, and we get exactly the same solution — if g and h are linear functions, the EKF gives the same result as the Kalman filter would.

OK, time to wrap up. What have we learned today? You should have understood the Kalman filter in the sense of what it means in terms of probability distributions: it does directly what follows from the Bayes filter, and through the rules for multiplying two Gaussians, conditioning, and marginalization, putting all of that together, we end up with the algorithm shown here. I haven't derived the algorithm by hand — it is a bit involved, but you can do it; there is course material online with the derivation if you really want to dive into the details, though I'm not sure how much that helps you understand the properties. The most important thing is that it is a weighted mean between the prediction step, which comes from the motion model, and the correction step, which comes from the observation model — it just computes a weighted mean of both.

The problem of the Kalman filter is that it requires linear functions, so we introduced the extended Kalman filter as a fix, in order to deal with nonlinear functions. The trick is just to linearize the functions around the current best estimate — the mean from the previous point in time — and then compute the linearized solution. That actually works well in practice for, let's say, moderate nonlinearities. If you have a completely odd nonlinear function it will screw up quite quickly, but if your nonlinear function is not too bad it works quite well — together with the fact that your uncertainty must not be too huge: you have seen by illustration that the bigger the variance of the Gaussian, the worse the approximation gets. So with moderate uncertainties and moderately nonlinear functions, this is a system that works quite effectively in practice.

In terms of complexity, two terms matter. k is the dimensionality of our observation, and the cost is k to the power of 2.4 — why this odd number? It results from a matrix inversion: this term has dimensionality k and we need to invert it, and the fastest known way to invert a matrix is roughly k^2.38; done the naive way it would be cubic, and this is still close to cubic. Second, the cost is quadratic in the number of state dimensions, because the matrices we maintain are square: the covariance matrix of the state is n×n, so I have to manipulate at least n² values just to update it. That's where the complexity comes from. So if you have either a huge number of variables to estimate (this term) or a very large observation vector (that term), it may be very costly — just keep that in mind.
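The remark above — that with linear models the EKF collapses back to the plain Kalman filter — can be verified directly. A 1D sketch with illustrative values:

```python
A, B, C, R, Q = 1.0, 1.0, 1.0, 0.25, 0.1   # illustrative 1D linear models

def kf_step(mu, sigma, u, z):
    """One Kalman filter cycle with the linear models A, B, C."""
    mu_bar, sigma_bar = A * mu + B * u, A * sigma * A + R
    K = sigma_bar * C / (C * sigma_bar * C + Q)
    return mu_bar + K * (z - C * mu_bar), (1.0 - K * C) * sigma_bar

def ekf_step(mu, sigma, u, z):
    """Same cycle written EKF-style: g, h are nonlinear in general, but
    here chosen linear, so their Jacobians are just G = A and H = C."""
    g = lambda x: A * x + B * u
    h = lambda x: C * x
    G, H = A, C
    mu_bar, sigma_bar = g(mu), G * sigma * G + R
    K = sigma_bar * H / (H * sigma_bar * H + Q)
    return mu_bar + K * (z - h(mu_bar)), (1.0 - K * H) * sigma_bar

kf = kf_step(0.0, 1.0, u=1.0, z=1.2)
ekf = ekf_step(0.0, 1.0, u=1.0, z=1.2)   # identical to the KF result
```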
But if you have, let's say, a small number of dimensions to estimate and a small number of dimensions in your observation space, it's actually pretty efficient.

OK, again a link to the literature. The Probabilistic Robotics book, Chapter 3, goes into the details of the Kalman filter — I tried to keep the notation here exactly as in the book, so if there is a step you missed or didn't understand from my explanation, you can revisit the book, or ask me, whatever you prefer. There is also a note by Schön and Lindsten on manipulating the multivariate Gaussian density, which tells you in detail how to do conditioning and marginalization, puts all of that together, and, as a byproduct, derives the Kalman filter in the appendix — if you really want to go into the details of deriving the Kalman filter, that's a pretty good paper to look at; it's also on the website. And there is a more accessible general tutorial on the Kalman filter by Welch and Bishop, which is also worth reading if you want to know more.

That's it from my side — are there any questions? OK, then thank you very much. You meet tomorrow for the exercise — there will be no lecture tomorrow, only the tutorial — and we also have new sheets for the next homework assignment; it's just a short recap of Bayes rule and these things, so it shouldn't be too dramatic. See you next week, thank you very much.
SLAM_Course_2013
SLAM_Course_08_Sparse_Extended_Information_Filter_Part_2_201314_Cyrill_Stachniss.txt
okay so welcome everyone to the second part of the sparse extended information filter before we continue with the two remaining steps I would like to very very briefly give you again the big picture also this kind of a two to three minute repetition of what happened last week so the key idea of the sparse information filter was to introduce an approximation to the excellent information filter which is the dual representation of the extended Kalman filter and exploit use it for the solving the slam problem but at the same time introduced an approximation that leads to sparse matrices especially as far as information information matrix and if they have such as far as information matrix we can do most of the operations much more efficiently that was kind of the key goal of the sparse external information filter to come up with and substantially more efficient actually a constant time algorithm for addressing the slam problem that in this case means we have only need to do a constant number of operations and it's the this number of operations is independent from the number of features we have in our map it's kind of the key goal of this algorithm and was motivated by the fact that if we look to general gaussian estimate here of landmark positions and we look to the covariance matrix which we obtain such a dense matrix that means in this case that the robot polls is correlated with all landmarks and that the individual x and y positions of the landmarks are correlated with those of the others however if we invert this matrix and look into what the information matrix looks like we actually see this pattern over here note that most of those elements which are seen here is white on the slide are actually not zero elements they are but are they are very very small values and the questions can be kind of replace this matrix by a very very similar matrix we can offset those small quantities to zero and then obtain a matrix which has just a small number of elements which is 
sparse a small number of nonzero so that we can exploit this fact in all the computations and obtain efficient algorithms that was kind of the key idea that we want to do of course we don't simply set all those values which are very small to zero we do it in a different way but this kind of the motivation behind that and the key idea that the sparse external information filter for slam exploits is to separate features into two categories for actually three in the end we have active features and passive features so active features are those features where there's a direct link between the robots pose and those features so these are features which are typically observed of our observer feature obtain additional information and I add elements to the off-diagonal fields of my information matrix and there's only a small number of those active features to be glued some constant parameter in the scythe implementation and all the other landmarks are so called passive landmark or passive features and we don't maintain direct links between the robots posts and those features and there's the third category which are features which have been active before and then become passive because with those features the those features we have to treat differently as we will see very very soon just as a short reminder what was the effect of a measurement so in this example again this is our information matrix before introducing any observation so we have some uncertainty about the robots pose but they're no landmark has been observed you have three landmarks M 1 M 2 M 3 in the environment and if I measure for example the position of the first landmark relative to the robots pose I obtain such a link between the robots pose and landmark and this corresponds to this off diagonal element if I also made her M 2 that I will get those links between the robots pose and M 2 and so whenever I measure something I add new information between in this information matrix but between the robots pose 
and the corresponding landmark that I measured. That was the measurement update: it adds additional elements into this matrix. Now let's look at what happens during the motion. This was the state before the robot performed any motion. The robot moves, continues traveling through the environment, and is now at position x at time t+1. What I do is get rid of x at time t and only maintain x at time t+1; you can see this as adding the new pose and marginalizing out the old one. If we continue to move forward, the two landmarks become correlated, because I add additional uncertainty to the robot's pose: the new state obviously depends on the old pose but introduces additional noise due to the motion. This leads to a direct dependency between m1 and m2 given all other parameters, so some of the probability mass moves into the element that was zero before. So what does the motion update do? It weakens the links between the robot's pose and the landmarks, because the robot moved, so its pose becomes more uncertain, and knowing the pose of the robot tells me a little bit less about where a landmark is located. But it also adds links between landmarks, namely between those landmarks which had direct links to the robot's pose: those become connected. The problem is, if I continue this process, I will end up with connections between all the landmarks and the robot poses, and this is something I want to avoid, because then I wouldn't have a sparse matrix anymore. Therefore the sparse extended information filter introduces an additional step, the sparsification step, and the goal of the sparsification step is to generate and maintain a sparse information matrix.
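The fill-in effect just described can be reproduced in a tiny numerical sketch (all sizes and values below are made-up illustrations, not the lecture's example): marginalizing the old pose out of an information matrix, via the standard Schur-complement rule for Gaussians in information form, creates a direct link between two landmarks that were each linked to that pose.

```python
import numpy as np

def marginalize(Omega, keep, drop):
    """Marginalize the variables `drop` out of an information matrix
    using the Schur complement (standard rule for Gaussians in
    information form)."""
    B = Omega[np.ix_(keep, drop)]
    C = Omega[np.ix_(drop, drop)]
    return Omega[np.ix_(keep, keep)] - B @ np.linalg.inv(C) @ B.T

# Toy state [x_t, m1, m2], one dimension each; values are assumptions.
# x_t has direct links to m1 and m2; m1 and m2 are NOT directly linked.
Omega = np.array([[ 3.0, -1.0, -1.0],
                  [-1.0,  2.0,  0.0],
                  [-1.0,  0.0,  2.0]])

# The motion update marginalizes out the old pose x_t:
Omega_after = marginalize(Omega, keep=[1, 2], drop=[0])
# The previously zero (m1, m2) entry is now nonzero: that is the fill-in.
```

Running this, the (m1, m2) entry of `Omega_after` is -1/3 even though it was exactly zero before the old pose was marginalized out, which is precisely why repeated motion updates densify the matrix.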
So if we are here in this situation, where this element corresponds to this link, what the sparsification algorithm does is to say: okay, I am going to remove this link over here, get rid of this direct link (this is an approximation), so that there is only a link between m2 and the new pose of the robot. The effect of this sparsification is that these elements become zero and those elements over here become stronger, because there is an additional link. So in the sparsification we go from this kind of matrix to this kind of matrix, but we only do that for active landmarks. An active landmark is a landmark to which we maintain a direct link. Whenever we sparsify our matrix, we typically turn an active landmark into a passive one, because afterwards there is no direct link between the robot's pose and this landmark anymore. This idea of using active and passive landmarks, and doing the sparsification based on them, is an approximation, because we assume conditional independence between different variables in our state space. Realizing this with active and passive landmarks was the key idea of the sparse extended information filter for SLAM. Again: the active landmarks are a subset of all landmarks, namely those to which the robot maintains a direct link, and the robot limits the number of landmarks to which it maintains such a link. Whenever this number exceeds the parameter the robot has set itself, it eliminates one of those direct links, and this is the sparsification step. All other landmarks, those which are not active, are passive. The only thing I have to take care of are the landmarks which are active at the moment but then turn into passive ones, where the direct links are removed through the sparsification process. So again, looking at our example: m1 was a landmark which was active before and now turns into a passive one; m2 is a landmark which is active
and stays active; and landmark m3 was passive and remains passive in this case. Okay, so the key idea for obtaining and maintaining a sparse information matrix in the sparse extended information filter is to use sparsification, and the sparsification is carried out in every iteration of the algorithm: after every motion update and measurement update, a sparsification step is carried out to make sure that my information matrix is sparse. All the computations which we introduced last week essentially rely on the information matrix being sparse in order to be executed efficiently, and this step ensures that it stays sparse. Without it, I could still perform the operations we derived last week in the same way, but they would be computationally costly because the matrices would not be sparse anymore. So we come to the four steps of the SEIF SLAM algorithm, and here I have to make a note: on the previous slide, as well as in the book and in all the documentation I found, there was an error in this SEIF algorithm. Previously, the state update was in the second line, i.e. the second and third lines were swapped, but that was wrong, as was pointed out: it doesn't make sense in that order. Why does this strange "update state estimate" step exist at all? The problem is that I need an estimate of the mean for several reasons. The first reason is the propagation of the robot's motion: the nonlinear function g requires the mean as an input, so the first step, the motion update, requires a mean. The second reason is that the measurement update requires a mean, because I need to compute a predicted measurement, and in order to compute a predicted measurement I need to know certain elements of my state vector: where is the robot right now, and where is the landmark,
according to my estimate, so that I can predict what the robot is going to observe. And, as we will see, the sparsification step will also require a mean estimate. So we need to maintain the mean. In the first step, computing the new mean is simple: it works exactly the same way as in the EKF, so there is no problem with that. I therefore have the predicted mean available for the measurement update, but after performing the measurement update I do not have the corrected mean, and I need the corrected mean for the sparsification step. That is the reason for the "update state estimate" function over here, which we will only look into very quickly: it estimates, or approximates, the corrected mean given the information vector, the information matrix, and the predicted mean. And then we have the sparsification step. So this was swapped in the lecture recordings of last week; that was wrong, and this is the correct variant: the corrected mean is estimated after the measurement update, based on the canonical parameters and the previous, predicted mean estimate. Okay, as a result the individual steps now line up nicely. We already did the first two steps last week, the motion update and the measurement update. So after we have executed the second line, we have an information matrix which is potentially not sparse anymore, we have an information vector, and we have the predicted mean but not the corrected mean. That is the result of the measurement update. Now we have to look at how we can recover the corrected mean estimate from those quantities. Again, the predicted mean is important, as I said before, for the motion model, for the measurement model, and also for the sparsification step. In the motion update, the prediction step, I don't
need to make any substantial effort, because I have the old mean, which is a parameter of the SEIF algorithm, and then I just have this delta function, the component of the nonlinear motion update function, and I compute the predicted mean based on that, exactly as in the EKF. So that was the last step of the motion update, and we get the predicted mean directly. The problem now is that we cannot easily compute the corrected mean. We can compute it in theory, but not efficiently: remember, we have the information matrix and we know the information vector, so in order to compute the mean we would just need to solve the linear system given by the information matrix and the information vector. However, the information matrix here is potentially not sparse anymore (we haven't sparsified it yet), so this operation is costly. What the sparse extended information filter for SLAM does instead is perform an approximation of this operation, asking: what should the correction of the mean look like? Can I correct just the few dimensions that actually matter to me and get a better estimate, so that I can use it in the sparsification step with a better idea of what the corrected mean looks like? The key trick in this approximation is that only a few dimensions of the mean are needed: what I typically need is the position of the robot and the positions of the active landmarks. It is just a small subset of dimensions which I am interested in, not all of them. Under this assumption, and given that I already have a good estimate of the mean, namely my predicted mean, which only misses the last measurement update, I can treat this as an optimization problem, for example using gradient descent or some other technique, in
order to find the minimum or maximum of a function, and in this way get an estimate of the corrected mean for a few dimensions only. The way this is done is as follows: I try to find the configuration of the mean that maximizes the overall belief. If I expand this (the constant factors are not shown here), I am looking for the best mean estimate, and I already have a starting configuration which is close to what I want to obtain, because I already have the predicted mean; I just need to do the last correction step. So it is a local refinement of the mean, so that it is better in line with the information matrix and the information vector, because the information I am interested in is already encoded in those two quantities. I could compute the mean completely, but that computation cannot be done in constant time. So how do we perform such an optimization? There are different ways. For instance, I could set up my objective function, compute its first derivative, set the first derivative to zero, obtain a system of linear equations, solve that, and iterate this process; that is a standard way of doing it. One can apply different techniques, gradient descent among them, to do a local refinement of this estimate. This can only be done efficiently here because I am interested in a constant number of dimensions: only the current position of the robot and the active landmarks; all the rest of the state I ignore. Again, this is an approximation, because in theory this step, executed exactly, could not be done in constant time, due to the number of dimensions that my mean vector has. But as I focus only on a few dimensions and update only those, I can actually do it.
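As a rough sketch of this idea (not the exact SEIF mean-recovery equations; the dimensions, the random matrix, and the step size are all assumptions), one can refine only a few entries of the mean by gradient descent on the quadratic objective 0.5 mu' Omega mu - xi' mu, whose minimizer is exactly the mean mu = Omega^-1 xi:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
Omega = A @ A.T + n * np.eye(n)        # toy SPD information matrix
xi = rng.normal(size=n)                 # information vector
mu_exact = np.linalg.solve(Omega, xi)   # what a full (costly) solve would give

# Predicted mean: already close to the solution, only the last
# measurement correction is missing (simulated here by small noise).
mu = mu_exact + 0.1 * rng.normal(size=n)

# Refine only the dimensions we care about: robot pose + active landmarks
# (index choices are assumptions for this toy example).
active = [0, 1, 2, 3]
sub = np.ix_(active, active)
step = 1.0 / np.linalg.norm(Omega[sub], 2)   # safe step: 1 / largest eigenvalue

g0 = np.linalg.norm((Omega @ mu - xi)[active])   # restricted gradient, before
for _ in range(50):
    g = (Omega @ mu - xi)[active]       # gradient of the quadratic, restricted
    mu[active] -= step * g              # update only the active dimensions
g1 = np.linalg.norm((Omega @ mu - xi)[active])   # restricted gradient, after
```

In a real SEIF the matrix-vector products would additionally exploit sparsity; the point here is only that a few gradient steps on a constant number of dimensions, started from the predicted mean, drive the restricted gradient close to zero.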
I don't really want to go into the details of how this is done, because it is somewhat orthogonal to the main point. The important thing to note is that we do a local refinement of the mean estimate in order to perform the last correction step, which would be expensive to compute exactly: just a local search, a local optimization with a good initial guess, because I already know the predicted mean, in order to come up with an estimate of what the corrected mean looks like. I won't go into further detail, because we will look into these kinds of problems later in the course; later on you will have pretty good means to understand what is going on and how to solve it, but for the material today it would distract from the main elements of the SEIF algorithm. Therefore I leave that out; we will come back to this step later in the course, when we try to solve the SLAM problem in a similar way, in particular after Christmas. For now we just say: we have a method to locally optimize the parameters over a constant number of dimensions, so this is not a costly operation. So this step is done, or let's say postponed. The key thing I want to discuss today is what the sparsification looks like: how can we ensure that the information matrix is sparse, so that we only have a constant number of off-diagonal elements, independent of the dimensions of this matrix? The sparsification step computes a new information vector and a new information matrix, and the important thing is that this new information matrix is now a sparse matrix. That is our goal, and it is needed, just to repeat it, in order to ensure that all the computations from last week can actually be done efficiently, because we always assumed a sparse information matrix. Okay, so let's dive into the sparsification. What does it mean to sparsify an information matrix? Can anyone tell me?
I have told you a few times already. So you want to ignore the unimportant links, yes. Could you say that a little more precisely: what is the general thing we would like to do? Exactly. And what does it mean to ignore some direct links, to get rid of some of those elements? What does this mean mathematically? Mathematically it means that we are assuming conditional independence of the variables. If two variables do not have a direct link, i.e. their off-diagonal element is zero, then those variables are independent of each other given all other variables. So if you go from this state to this state, it means the current pose of the robot at time t+1 is conditionally independent of the position of the landmark m1 given that I know m2. If I don't know m2, that is a different story. But given all other dimensions, in this case m2, knowing the robot's pose does not provide me any additional information about where m1 is, and the other way around: if I know where m2 is, additionally knowing where m1 is does not help me estimate the pose of the robot at time t+1. So given all other dimensions, two variables are independent of each other if they do not have a direct link. That is the mathematical interpretation of a zero off-diagonal element in this information matrix, and that is exactly what we are going to exploit. If you have a distribution over three random variables, let's call them a, b, c, then this approximation, this sparsification, means replacing p(a, b, c) with a different distribution: coming up with an approximation of the distribution by assuming conditional independence, in this case that a is independent of b given c. So p(a | b, c) is the same as p(a | c): knowing b contributes nothing to a, given that we know c.
And the same for the second factor: p(b | a, c) is the same as p(b | c). So this is the approximation I introduce: my p-tilde, the approximating distribution, should capture exactly this effect: given that we know everything else (everything else is c here), these two variables are independent of each other. So what we do is say: p(a, b, c) can be factorized as p(a | b, c) times p(b | c) times p(c); that is the standard chain rule of conditional probability, no approximation yet. Then I make my approximation: I replace p(a | b, c) by p(a | c). This is the approximation made in the sparsification: given that I know c, b contributes no additional knowledge that helps me estimate a. What I can then do is multiply by the term p(c) / p(c), which is one, and this simply leads to p(a | c) times p(c) being p(a, c), and p(b | c) times p(c) being p(b, c), with one term p(c) remaining in the denominator. So the approximation is to go from the original belief to the belief p(a, c) times p(b, c) divided by p(c). This is what we do in the sparsification, and it is exactly the approximation we will now carry out in the context of the sparse extended information filter, exploiting the individual categories of landmarks: active landmarks, passive landmarks, and active landmarks that turn into passive ones.
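Written out in one place, the factorization and the single approximation step are:

```latex
\begin{aligned}
p(a,b,c) &= p(a \mid b,c)\,p(b \mid c)\,p(c)
  &&\text{exact (chain rule, applied twice)}\\
&\approx p(a \mid c)\,p(b \mid c)\,p(c)
  &&\text{sparsification: assume } a \perp b \mid c\\
&= \frac{p(a \mid c)\,p(c)\cdot p(b \mid c)\,p(c)}{p(c)}
 = \frac{p(a,c)\,p(b,c)}{p(c)}
  &&\text{multiply by } p(c)/p(c)=1
\end{aligned}
```

Only the middle line is an approximation; the first and last lines are exact identities.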
Were there any questions at the moment? Yes, please. So, the first line is just an exact transformation: it is just the definition of conditional probability. If I have p(a, b) equal to p(a | b) times p(b), that is exactly what is applied; the only difference is that we additionally have the variable c involved, and it is done twice, in two steps for the three variables: first for the first two, and then for the other two. So the first line is exact, there is no approximation in it; the approximation is transforming p(a | b, c) into p(a | c), where I assume that a and b are independent of each other given c, and this corresponds exactly to having a zero off-diagonal element between a and b. Here a could be the robot's pose at time t+1 in the previous example, b would be the landmark m1, and c would be all the other dimensions, in this case m2. Okay, any further questions? Yes, please. Could I have left it as p(b, c) directly? These are all just exact transformations, so yes: this term here is exactly p(b, c), that was your question, right? You are right, I could have expanded only the first term and left p(b, c) there, saving those steps, and I would have fewer terms in the equation. Absolutely right, that is possible and equivalent, just more compact. Okay, any further questions? Okay. So we have now understood what the sparsification does: it assumes conditional independence between two variables given the rest, and this is what we want to exploit in SEIF SLAM in order to obtain an information matrix which is sparse. Our goal is to approximate our information matrix so that it becomes and stays sparse, and the way this is done is by only taking into account direct links between the robot and a small subset of the landmarks, which are exactly my active landmarks. This also has the effect that the number of links between individual landmarks is limited, because additional links between landmarks are only created for landmarks which have been active at the same time, and not all landmarks are active at the same time. You can see it as a small sliding window which travels with the robot, in the surrounding
of the robot. So all landmarks are locally connected with their neighbors, but each landmark will only have a small number of connections, independent of the size of the overall environment. If I have a local scene where, say, every landmark has ten neighbors, and I expand the environment by replicating this room a million times, that does not change the local structure: every landmark has, on average, only a constant number of direct links to other landmarks, independent of the overall size of the environment or the space I am going to map. And links are created between landmarks which have been active at the same point in time. Okay, so again, the key trick is that the robot only has direct links to the active landmarks, and if we only have direct links to them, then the motion update automatically enforces that only those active landmarks get connected with each other through direct links, and not the other landmarks. This slide, just to remind you: active landmarks are a subset of all landmarks, and all the others are passive landmarks. We now go a step further: we have the landmarks m-plus, which are the active landmarks; we have m-zero, which are the active ones that now become passive; and we have m-minus, which are the passive ones. These are the three quantities which are important now for our sparsification. We said we want to remove the direct links between the robot's pose and the active landmarks which become passive. For all landmarks which are active and stay active, we do not have to do anything. For all landmarks which were passive before and stay passive,
we do not have to do anything either. In the sparsification step, we only have to remove the direct links of those landmarks which were active but now turn into passive ones. Why do they turn passive? Because the robot, for example, observed a new landmark which was passive before; that one becomes active, and since only a constant number of active landmarks is allowed, I have to throw one out, probably the one which I have not observed for the longest period of time. It is like a FIFO buffer, sorted according to the time step when I last saw those landmarks; whenever I add a new one, I throw out the oldest one. That is one way to maintain your set of active landmarks. Whenever a landmark is thrown out of my list, I need to make sure there are no direct links to the robot's pose anymore, otherwise I would get my dense matrix back. This is equivalent to assuming conditional independence between the robot's pose and the landmark which becomes passive, given all other landmarks. It is important to note that this operation does not change the links of the passive landmarks: all landmarks which are passive stay as they are, so I do not add any new links to passive landmarks, which could otherwise make my matrix dense again; that is not something which happens here. So I rewrite my posterior over the pose and the map into four groups of variables: the current pose and my three categories of landmarks, the active ones, the active ones which become passive, and the passive ones. On the next slide I continue by performing this sparsification, the one we have seen with our variables a, b, c before. Note that I have dropped all the dependencies on the observations and controls here; think of them as still being there, I just have not written them down, otherwise the slides would get too crowded with variables.
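The "sorted buffer" bookkeeping for the active set described above could be sketched like this (the class and the evict-the-oldest policy are illustrative choices, not something the SEIF prescribes):

```python
from collections import OrderedDict

class ActiveSet:
    """Toy bookkeeping for the set of active landmarks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = OrderedDict()          # landmark id -> last seen time

    def observe(self, lm, t):
        """Mark landmark lm as active at time t. Returns the landmark that
        must be sparsified out (active -> passive), or None."""
        if lm in self.active:
            self.active.move_to_end(lm)      # re-observed: it is newest again
        self.active[lm] = t
        if len(self.active) > self.capacity:
            oldest, _ = self.active.popitem(last=False)   # evict oldest
            return oldest                    # this plays the role of m-zero
        return None

s = ActiveSet(capacity=2)
s.observe("m1", 0)
s.observe("m2", 1)
m0 = s.observe("m3", 2)   # capacity exceeded: m1 turns passive
```

Here observing `m3` pushes `m1` out of the active set, and it is exactly that returned landmark whose pose link the sparsification step then has to remove.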
Okay, so this was the step you saw on the previous slide: we just expanded m into m-plus, m-zero, m-minus. The next step is the definition of conditional probability, exactly what is still on the blackboard: in this case a corresponds to the pose x and b corresponds to the three landmark groups, so we turn the joint belief p(a, b) into p(a | b) times p(b). So far nothing bad has happened. Now let's look at this estimate over here: what does this strange "m-minus equals zero" mean? The thing is, there are only direct links between the robot's pose and the active landmarks, plus the active ones which become passive, which I am going to get rid of. There are only direct links between the pose and m-plus and, currently, m-zero; there are no direct links between the pose and m-minus. So whatever values the passive landmarks take, they do not impact my current estimate, and I can simply set them all to zero in this expression. They are all zero: that is what I assume for the moment for doing these operations, because I do not need them; whatever value they have, I condition on them, and I can set them to zero because I know they will not affect the pose estimate. Okay, so exactly as I just said: given the active landmarks, the passive landmarks do not matter for computing the robot's pose, so I can set them to zero. The next thing is the sparsification proper: I assume conditional independence between the robot's pose and those landmarks which were active before and now become passive. Given all other landmarks, i.e. given the active ones and the passive ones, which are all other elements of my state space, the robot pose and the landmarks which become passive are independent of each other. This is my sparsification
assumption: given everything else, the pose and m-zero should have no direct links. This is the assumption I make, and here I introduce the error: it is no longer exact, I introduce an approximation error, and that is what the sparsification does. Because I assume conditional independence, m-zero simply disappears here; I do not need it anymore, because my assumption is that the robot pose is independent of m-zero given all other landmarks, the active ones and the passive ones. Is that clear? That is the key step here. What I can then do is rewrite this, in a similar way to what we did before. This term stays exactly the same, and this term over here can be rewritten as the joint belief of the pose and m-plus given m-minus equal to zero, divided by the belief of m-plus given m-minus equal to zero. This is exactly the same trick as with c before, the p(c) divided by p(c) term. And this is now my sparsified belief, my p-tilde of the pose and the map. The question is: how do I compute this expression? I need to replace my current information matrix by the information matrix which corresponds to it. So let's have a look at how we can actually do that. This is just a summary of what was written on the previous slide, except that I have added the observations and the controls, to have it written formally correctly. The key trick is to replace the information matrix of this Gaussian distribution, the tilde, i.e. the approximated information matrix, by a combination of three information matrices Omega-one, Omega-two, and Omega-three, and they
correspond to the individual beliefs in the factorization. Now I just have to compute the information matrices of these individual terms and then put them together, and this can actually be done; it is not too difficult. The first thing I do is take my original information matrix, the one I had before, and condition it on m-minus equal to zero. This gives me a new information matrix, Omega-zero, corresponding to the belief of the pose, m-plus, and m-zero given m-minus equal to zero: I just compute the conditional and obtain the information matrix which corresponds to this belief. Then I marginalize out m-zero, which gives me the information matrix Omega-one, corresponding to the belief with m-zero integrated out: the pose and m-plus given m-minus equal to zero. This is the information matrix which corresponds to the first factor. I do exactly the same again: marginalizing out the pose and m-zero from that belief gives me the information matrix Omega-two. That means I take Omega-zero and get rid of the pose and m-zero, so only m-plus remains, which leads exactly to the term in the denominator. And then, for Omega-three, I take the original information matrix and marginalize out the pose, which gives me the belief over m-plus, m-zero, and m-minus. So I just apply the marginalization and conditioning rules for information matrices of Gaussians that we discussed at the beginning: one of these operations is cheap in information form, the other is more expensive, and these steps exactly perform the individual operations.
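The whole construction can be traced on a toy example (one dimension per variable; all numeric values are made up, and the combination Omega-one minus Omega-two plus Omega-three reflects that the division by p(m-plus | m-minus = 0) turns into a subtraction in information form):

```python
import numpy as np

def condition(Omega, keep):
    """Conditioning in information form is cheap: keep rows/columns."""
    return Omega[np.ix_(keep, keep)]

def marginalize(Omega, keep, drop):
    """Marginalization in information form: Schur complement."""
    B = Omega[np.ix_(keep, drop)]
    C = Omega[np.ix_(drop, drop)]
    return Omega[np.ix_(keep, keep)] - B @ np.linalg.inv(C) @ B.T

def pad(M, idx, n):
    """Embed a small information matrix back into the full state space."""
    P = np.zeros((n, n))
    P[np.ix_(idx, idx)] = M
    return P

# Toy state [x_t, m_plus, m_zero, m_minus]; note x_t has links only to
# m_plus and m_zero, never to m_minus.
x, mp, m0, mm = 0, 1, 2, 3
Omega = np.array([[ 5.0, -1.0, -1.0,  0.0],
                  [-1.0,  4.0, -0.5,  0.0],
                  [-1.0, -0.5,  4.0, -0.5],
                  [ 0.0,  0.0, -0.5,  4.0]])

Omega0 = condition(Omega, [x, mp, m0])               # condition on m_minus = 0
Omega1 = marginalize(Omega0, keep=[0, 1], drop=[2])  # drop m_zero
Omega2 = marginalize(Omega0, keep=[1], drop=[0, 2])  # drop x_t and m_zero
Omega3 = marginalize(Omega, keep=[mp, m0, mm], drop=[x])  # drop x_t (full matrix)

Omega_sparse = (pad(Omega1, [x, mp], 4)
                - pad(Omega2, [mp], 4)
                + pad(Omega3, [mp, m0, mm], 4))
# The direct (x_t, m_zero) link is gone; the m_minus links are untouched.
```

After the combination, the entry coupling the pose and the landmark turning passive is exactly zero, while the passive landmark's own links are unchanged, which is precisely the behavior the slides describe.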
Then I can generate my final, sparsified information matrix for my belief. Once I have that, I can compute the sparsified information vector, the information vector which corresponds to the sparsified information matrix, by multiplying that matrix with the mean; that is what I would need to do. I can expand this into the following equation: I take the original information matrix, subtract it and add it back (which adds zero), times the mean. This gives me the old information matrix times the mean, which is the old information vector, plus the difference between the sparsified and the original information matrix, times the mean. Written this way, it can be implemented very efficiently, because the quantity I want to compute is just my old information vector plus only those elements where the information matrices differ, times the mean. This difference matrix is basically zero everywhere except for the few landmarks affected by the sparsification, where the sparsification changed the values of the information matrix. There is just a small number of nonzero elements, so I can update my information vector very efficiently without performing full matrix operations. That is the key trick again: only a small number of elements of the difference between the sparsified and the non-sparsified matrix are nonzero, because we are just getting rid of maybe one or two active landmarks which became passive. Just a few elements are set to zero, and a few elements, those of the affected active landmarks, change their value.
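This information-vector update can be sketched as follows (the toy matrices and values are assumptions); the point is that the difference matrix has only a few nonzero rows, so only those entries of the information vector need to be touched:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# A toy information matrix with one pose-landmark link (values assumed):
Omega = np.diag(np.full(n, 4.0))
Omega[0, 1] = Omega[1, 0] = -1.0
# Suppose sparsification removed that link and adjusted the diagonal:
Omega_tilde = Omega.copy()
Omega_tilde[0, 1] = Omega_tilde[1, 0] = 0.0
Omega_tilde[0, 0] -= 0.25

xi = rng.normal(size=n)   # old information vector
mu = rng.normal(size=n)   # mean estimate

# xi_tilde = xi + (Omega_tilde - Omega) @ mu, but the difference matrix
# is nonzero in only a few rows, so we update only those entries:
Delta = Omega_tilde - Omega
rows = np.flatnonzero(np.any(Delta != 0.0, axis=1))   # touched dimensions
xi_tilde = xi.copy()
xi_tilde[rows] += Delta[rows] @ mu
```

Here only two of the six entries of the information vector are updated, yet the result agrees exactly with the full (and much more expensive) matrix-vector computation.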
But the rest stays the same: it is a very small, local change. Okay, so this is what the sparsification algorithm looks like. This ugly block here is just the individual conditioning step, and the rest are the marginalization steps on the information matrix, marginalizing out the individual variables. The matrices F here are the projection matrices we have already used; so far we only used F for the robot state, but now the projection for m-zero is everything zero except those elements which correspond to the landmarks which were active and become passive, and the same for the other subsets. They are projection matrices which are zero everywhere and just have ones in a small area. These steps correspond exactly to the operations we derived before; I have not expanded them, but it is, I don't want to say straightforward, but just applying the marginalization and conditioning rules step by step. There is no black magic: these maybe slightly scary formulas are really just marginalizing out variables and conditioning on a subset of the variables. That's pretty good: we are done with all the important steps of the sparse extended information filter. What we have now is an algorithm which performs all operations in information space and additionally maintains the mean, which I need for basically all operations. It has a motion update in constant time, assuming the information matrix is sparse; it does the measurement update in constant time, assuming the information matrix is sparse; and it performs the state update in nearly constant time, because it only involves a constant number of dimensions, although it is an iterative process with a certain number of gradient descent iterations to carry out. Of course you can bound the number of iterations, but then you
don't know whether the process has converged or not, so take it with a bit of care. In a real implementation you typically just execute a fixed number of iterations and then simply stop, because you are close enough to the estimate you want anyway. This approximation really doesn't affect the estimate much; in practice just a few iterations are typically sufficient to come up with a good estimate, and the approximation error that you introduce in this step is negligible compared to the error from the sparsification. Okay, so what's the effect of this sparsification? What you see here on top is the standard result that the EKF, or in general the extended information filter, would generate. You see a graph with the landmark positions, and this is the robot pose, and you see the connections; the black ones here are the active landmarks. In this trivial case every landmark is active, there are no passive landmarks, no set m0, so nothing is sparse anymore; it's exactly the result that the extended information filter or the extended Kalman filter would produce. And this is the result of the SEIF SLAM algorithm, fixing, I think, six active landmarks. Although you cannot see that this is a constant number, you can see that the number of edges in here is much smaller. These are the corresponding information matrices; visually it's actually hard to see a difference, but the real difference is that here a lot of values are exactly zero, whereas here the values are small but not exactly zero. That's the main difference between them. And if you look at the covariance matrices, you can see this one is a little bit lighter and brighter, I'm not sure if you can see that. So the differences may look small, but there
is definitely a difference between both estimates, because we have done this approximation of sparsifying our information matrix: we ignore certain correlations, we assume conditional independence between variables given all the rest. That's the approximation we have made. Okay, if we now compare SEIF SLAM to EKF SLAM: we have a roughly constant-time algorithm, and I say "roughly" because it's not exactly constant time due to the recovery of the mean, versus the quadratic complexity of the EKF. This is a great improvement in terms of the computational resources I need to spend for every update: it's not quadratic in the number of landmarks anymore, it's constant. That's a really major win, because no matter how big the environment is, it works at roughly the same speed, whereas for the EKF the computational resources grow quadratically, which can become intractable quite quickly. However, we have a less accurate estimation system: we made approximations, and so the accuracy of our results decreases compared to the EKF as a result of the approximation. We also have linear memory complexity: we still need to store all landmarks, and with a sparse information matrix there is a linear number of elements in there, so the memory storage I need is linear, but I only need to update a constant number of entries of these vectors and sparse matrices, so every step is constant time while the memory complexity is linear. That's still better than the quadratic memory complexity the EKF requires, because it needs to store the full dense covariance matrix; since I only store a sparse matrix, I get linear memory complexity compared to the quadratic complexity of the EKF. Okay, let's see how this changes the computational resources that I need. What you see here are the computational resources that
need to be spent for the EKF over the number of landmarks: in this case, for 400 landmarks, an update takes about one second for the EKF, whereas the SEIF stays much smaller. There is some overhead in the beginning, but then it stays roughly constant and will continue like this. So while this one grows through the sky quite quickly, this one stays roughly constant; this is a big win. Next we can look at the memory usage, and again we can at least roughly see the quadratic pattern here versus the linear pattern there, because for every new landmark that we add we just have a constant number of new dimensions in our information vector and mean, one landmark means two new dimensions, and a constant number of new elements in the information matrix, so there is really linear growth over here. These are two cases with a clear win for SEIF SLAM compared to EKF SLAM. But if we look at the approximation error, comparing the EKF and the SEIF against ground truth, we can actually see that the SEIF makes an error, and this error obviously depends on the number of active features: if I let the number of active features grow towards the number of landmarks I have in the scene, then these curves should lie on top of each other. And if you plot the computation time required for the update, it grows with the number of active features; I'm not sure which dataset this corresponds to, maybe it was the Victoria Park dataset. If you look at the number of active features, we can see that up to 10 active features we are still substantially faster than the EKF. Of course
this simply means that this curve would keep growing, something like this. And if I look at the approximation error in my estimate versus the number of active features, you can see that with a small number of active features, like four or five, I make really big errors with the SEIF, but as soon as I increase that to somewhere from seven to ten, the errors get really small and it's hard to identify a difference between the estimates. So this area seems to be a reasonable compromise between small approximation error and efficient runtime, somewhere along these lines. To sum up what I presented this week and last week: the sparse extended information filter for SLAM is a variant of the extended information filter which allows us to maintain sparse information matrices, and due to the sparsity we can perform the updates much more efficiently. The sparsification means that we neglect or ignore direct links in the information matrix, assuming conditional independence between variables given all the other variables. This is definitely an approximation, so the quality of our solution decreases, but we are much faster than computing it exactly, and we get a constant-time algorithm, which is quite remarkable in the context of SLAM, assuming that we have known correspondences. If I need to search for correspondences, this may change the game a little bit, because I may need to look into a larger number of landmarks to identify which landmark in my map an observed landmark corresponds to. This can be more costly; it depends on the setup. You may need to maintain efficient lookup structures like kd-trees or something like this, which introduces additional complexity, but we can still do that quite fast. Again, the quality is lower compared to extended Kalman filter SLAM because of the approximation that I do. The SEIF framework itself is not always directly applicable to
other problems. I mean, if you have another problem which has a very similar structure to the SLAM problem, that means you have only a small number of dimensions which are updated in the prediction step and only a small number of variables that are observed in a measurement update, say because you have a sensor with limited range, then you can actually apply it. But it's a less general estimation technique than, say, the EKF: you can use it elsewhere, but then it loses the great performance that it has. That's something to keep in mind: it explicitly exploits some of the properties of the SLAM problem, and therefore it was successfully used in SLAM, but I am, at least, not aware of a lot of other applications of this framework, because we rely on this special pattern: only a small number of dimensions change in the prediction step, which is just the pose of the robot, and at every point in time we observe only a constant or limited number of features or other variables. In this situation this technique can be used in an effective way. Any questions about that? Yes, please. So, the deletion: when I said "deleted", that was an informal term which I used in my explanation to make it easier to grasp. Where links actually go away is exactly in this marginalization step in the sparsification, because I really marginalize variables out. And if you marginalize out a variable, what happens is that all the neighbors of this variable that had direct links to it become densely connected with each other. It's like in a Gaussian Markov random field: when you marginalize out a variable, you get this elimination clique where all neighbors are connected. Exactly the same happens here; it is hidden in these marginalization and conditioning rules. So what I said in terms of deleting those links was just an informal way of saying what happens here.
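The fill-in effect just described can be seen in a tiny numerical example; the 3-variable information matrix here is an illustrative assumption. Marginalizing out a variable via the Schur complement makes its former neighbors directly connected, exactly the elimination clique from the Gaussian Markov random field view.

```python
import numpy as np

# Information matrix over variables (x0, x1, x2): x1 is linked to x0 and x2,
# but x0 and x2 are NOT directly linked (their off-diagonal entry is zero).
omega = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

def marginalize(omega, keep, out):
    """Marginalize the variables in `out` from an information matrix via the
    Schur complement: O_kk - O_ko @ O_oo^{-1} @ O_ok."""
    o_kk = omega[np.ix_(keep, keep)]
    o_ko = omega[np.ix_(keep, out)]
    o_oo = omega[np.ix_(out, out)]
    return o_kk - o_ko @ np.linalg.inv(o_oo) @ o_ko.T

reduced = marginalize(omega, keep=[0, 2], out=[1])
# x0 and x2 are now directly connected: the former zero entry is filled in.
print(reduced)
```

The off-diagonal entry between x0 and x2, which was exactly zero before, becomes nonzero after marginalizing x1 out; no explicit "link deletion" happens anywhere, the densification is a direct consequence of the marginalization rule.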
Therefore, in the beginning we also had these arrows showing how information flows from one element to the other, if you remember that; this is something which results from these steps. So it's not some dirty hack that does it; it results automatically from the conditioning and marginalization rules for the Gaussian distribution in information form, the canonical form, as opposed to the standard moment form. Okay, yes please. So you mean if you automatically adapt the size of your set of active landmarks? Well, the bigger that set is, the better your performance: if you shrink it, your result gets worse and you may be a little bit faster, but every step is a constant-time step anyway, so at least from the computational complexity point of view you don't win anything, your solution just gets worse. Of course, if you count the number of instructions your processor needs to execute, that will be less if you reduce the set, so I would say it's not worth doing it: the result gets a bit worse and you only save a few operations. I mean, if your processor is running exactly at the limit, you can do that. Okay, then I misinterpreted your question: you want to temporarily increase the set of active landmarks. You could do that, yes. I cannot tell you how big the impact on the result is, because I haven't tried it; it's really hard to estimate how big the effect is, and it probably depends very much on the exact sensor setup, on how much information a measurement provides you compared to the other information you have. So I guess it's very problem- and instance-specific, so it's hard to comment on this in general, and since I haven't tried it I can only give you an educated guess and not tell you exactly what it would
be. Okay, so if you want to look at what happens here in more detail, the best resource is the Probabilistic Robotics book, the chapter on the sparse extended information filter. But be aware that there are quite a couple of inaccuracies in the book, so if you read it, open your browser, look up the errata page, and skim through the errors for your printing, because there have been quite some updates on that. So if you read through it and find it's completely different from what's written on our slides, given the current printing of the book I would trust the slides more, because some of the errors are fixed there. If you find any other errors, let me know, or let the authors know, and they will end up on the errata page. So if you see a discrepancy, check the errata page before you complain; that's my advice to you. Okay, anything else at the moment? Okay, then I'm just…
SLAM_Course_2013
SLAM_Course_06_Unscented_Kalman_Filter_201314_Cyrill_Stachniss.txt
okay, so I would like to start today with another family member of the Kalman filter and all its friends. We looked so far into the Kalman filter and the extended Kalman filter as two estimation techniques for doing recursive Bayesian filtering in the presence of Gaussian distributions and linear, or at least linearizable, models. What we will look into today is the unscented Kalman filter, which you can see as an extension of the extended Kalman filter, mainly designed for situations where the linearization of the extended Kalman filter, the Taylor expansion, works suboptimally. To summarize: the Kalman filter required exactly linear models, otherwise the distribution we obtain would not be a Gaussian distribution anymore after a prediction step or a measurement step; therefore it was essential to have linear functions, and the Kalman filter required those linear functions. The problem is that most functions in the real world are nonlinear, and therefore we had to find solutions for how to deal with nonlinear functions. One way to do that is the extended Kalman filter, and what the extended Kalman filter does is basically take the mean estimate that it currently has from the previous time step, linearize the nonlinear function around that mean, obtain a linearized function, and then apply the Kalman filter equations to update the state. So the question is: is there actually a better way to do that? How could you imagine doing this in a smarter way? What you want to do is: you have a Gaussian distribution, you want to squeeze it through some nonlinear function, and then you want to approximate the result by a Gaussian again. That's actually what we do in the EKF: we have our Gaussian, we linearize our function, obtain the mapping of that function, and then the result is Gaussian again. But there are other ways to do that, and there are better ways to do that if we think about the approximation
quality of that Gaussian approximation. The question is: can you imagine a way to do that? It's actually quite intuitive; let me help you a little bit with a drawing. We have a Gaussian distribution, which may look like this drawn in 2D, and we want to squeeze it through some ugly function g which is nonlinear, and then we want to obtain our new distribution, which of course is not exactly Gaussian anymore after applying this nonlinear function g, but we want the best Gaussian approximation of the Gaussian transformed through g. What are ways of doing that? If I tell you nothing about g, I just say I give you a function that you can evaluate, so you can call g(x) and it gives you the transformed value, how would you realize this mapping? How would you obtain this Gaussian distribution? What's an intuitive way for you to do that? Yes? Right, we could generate sample points which we draw according to this Gaussian distribution, so we have our points here. What would you do then? Yes, we propagate all these points through g, so we have our transformed points now lying over here. What do we do then? We now have our transformed points; happy with that? Maybe not, because we want to have a Gaussian in the end, so we approximate: we just compute the mean and covariance matrix based on the transformed points. That's it, and that's very close to what the unscented Kalman filter does. The problem is you may need a huge number of samples to do that, and the unscented transform tells you how to choose these points in a special way so that you can do it with a quite small number of samples. You can see it as an approximation of covering the space densely with samples, propagating all those samples, and doing exactly what you have just told us; the unscented transform, which is used in the unscented Kalman filter, does something very similar to that.
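The sampling idea just discussed can be sketched like this, as a naive Monte Carlo version. The nonlinear function g (a polar-to-Cartesian mapping) and all parameter values are assumptions chosen purely for illustration; the unscented transform will replace the many random samples with a few deterministically chosen sigma points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example nonlinear function g (polar -> Cartesian), chosen for illustration.
def g(points):
    r, theta = points[:, 0], points[:, 1]
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

mu = np.array([1.0, np.pi / 4])   # mean of the input Gaussian (range, bearing)
sigma = np.diag([0.01, 0.05])     # its covariance

# 1) draw samples from the Gaussian, 2) push them through g,
# 3) fit a Gaussian to the transformed samples.
samples = rng.multivariate_normal(mu, sigma, size=100_000)
transformed = g(samples)
mu_out = transformed.mean(axis=0)
sigma_out = np.cov(transformed, rowvar=False)
```

This gives a good Gaussian fit of the transformed density, but only because of the huge number of samples; the point of the unscented transform is to get a comparable approximation from just 2n+1 weighted points.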
So that's exactly what we do: we want to use the unscented transform, which is one way of transforming a Gaussian distribution through a nonlinear function and getting an as good as possible Gaussian approximation of the transformed density, and using this unscented transform in the Kalman filter, for dealing with a nonlinear motion model and a nonlinear observation model, is what is referred to as the unscented Kalman filter. Okay, so as said before, what the EKF, or the Taylor approximation used in the EKF, does is take the current mean, just this single point, use it as the linearization point for linearizing the nonlinear function g, and then, having a linearized function g, map the Gaussian distribution using the linear function. The important thing is that we use just one single point, and this is suboptimal: if the function is highly nonlinear in the local region covered by, say, the 3-sigma ellipse, then this is a suboptimal strategy. What the unscented transform does instead is generate so-called sigma points. This is a deterministic technique to choose them: given a mean and a covariance, there is a standardized way to do that. We obtain those points and then do exactly what you told us before: we map those points through the nonlinear function and use the transformed points to reconstruct a Gaussian distribution. There is a little bit more to it than that: it does not only use points, it also assigns a weight to each point, so these are weighted points. Every sigma point has a weight, and this weight is simply used in the reconstruction. To summarize everything that the unscented transform does: it first computes the sigma points, then each sigma point gets a weight, which is computed in a certain way, and then the sigma points are transformed through our nonlinear function
which we don't need to linearize; that's the big advantage here. Then we estimate a Gaussian distribution from the transformed points, and that's our approximation of the original Gaussian distribution propagated through the nonlinear function: the as good as possible Gaussian approximation of the outcome. Of course, if you transform a Gaussian through a nonlinear function, the result will not be Gaussian anymore, but we want the best possible Gaussian approximation of it, because if we left the Gaussian world, the whole Kalman filter framework would not be applicable anymore. So it is an approximation: by just taking those points, like we had on the blackboard, and transforming them through g, the points are not Gaussian distributed anymore, but we are trying to find a good Gaussian approximation for them. It's important to know that it's an approximation, nothing exact. The main advantage is that we don't only use the mean for the linearization, as the Taylor expansion in the extended Kalman filter does, but we additionally take points into account which are further away from the mean, and this typically gives us a better approximation of the function we are going to approximate. So the key question is: how should we choose our sigma points, and how should we choose our weights? What are our constraints for doing that? If I give you the task of providing me a strategy for generating those points and for coming up with weights, what are the global constraints that we want these points to satisfy? Well, they have to lie inside the current distribution, and they have to be chosen, not randomly sampled, because as I said we do that in a deterministic way, given a mean and a covariance, actually only a covariance, because the mean shouldn't matter; everything should be relative to the
mean. How to select those points? That's absolutely right: the closer they are to the center, the better they approximate the largest amount of the probability mass, because the more we are in the center, the higher the probability mass is there. But what are, from a more general perspective, the requirements we have for these sigma points? What is the easiest function you can imagine for such a transformation, the most trivial one? No, not for choosing the points, for the transformation: the identity. Okay, so let's start with the simplest requirement: we want points which can at least reconstruct the original distribution exactly. If we generate those points and then apply the identity, which means doing nothing, and then reconstruct the Gaussian distribution, we want to come up with the same mean and the same covariance; if this is not given, the approach is likely to be flawed. And this gives us the first constraints that we want to have: we want to choose the sigma points, the individual points of which we choose a certain number, and weights for the sigma points, so that the weights sum up to one, so that reconstructing the mean from these weighted points gives us exactly the original mean we had, and the same for the covariance matrix: reconstructing a covariance matrix from these weighted points should give us exactly the covariance matrix we had before. If this weren't the case and you applied the unscented transform to the identity, your system would diverge without doing anything, so this is a requirement that we have. Is this clear to everyone? Okay, perfect. The only thing we need to mention is that there is no unique solution to the problem of how to select the sigma points and the weights; there are several ways of doing this so that they all fulfill this constraint. This is nothing dramatically bad for us; the only thing which should be
noted is that there is a space of solutions, so there is no single best strategy for selecting the weights and the sigma points. There is one way which I will show today, and you typically have some free parameters in there, like how to set the weights, and you can tweak those parameters if you have certain background knowledge about your function, for example if you want to cover some higher-order moments. A Gaussian distribution has two moments: the mean is the first moment and the covariance matrix gives the second, and the additional higher-order moments are all zero for the Gaussian distribution. If we have some additional knowledge, we may optimize those parameters to take it into account, but if we don't know anything, we don't really need to care much about those parameters. I will in the end provide one way of selecting them, how the parameters depend on each other, and what it means to set a certain parameter, but you have the freedom of selecting these parameters. Okay, so the question is how do we choose the sigma points, and I will now tell you how to do that and also give you an intuitive explanation of why this may make sense. The first sigma point we choose is our mean; the mean is kind of the best single point, it's what the Taylor approximation linearized around, so the mean is in there. The number of sigma points you choose depends on the dimensionality of your distribution: for every dimension you have, you choose two additional sigma points. So if you have a one-dimensional Gaussian, you choose three sigma points, the mean plus two times one dimension; if you have a two-dimensional Gaussian, you choose five sigma points, the mean plus two times two. And they are chosen in a way that they are parameterized relative to the
mean: you start with the mean, and then you have the mean plus some term and the mean minus some term. So for each dimension you have something in the plus and minus direction, centered around the mean in the individual dimensions. That also makes sense: if you want to reconstruct the mean from that, you always have two points which can cancel each other out when transformed through the identity, and you will end up with your mean point in the end. Okay, so how are they chosen? This term over here with the index i is a column vector of a matrix: the expression in brackets is a matrix, and this is its i-th column vector. It is the square root of something, and this something consists of three parts. The term n is the dimensionality of our problem: do we have a one-dimensional Gaussian distribution, a two-dimensional Gaussian distribution, and so on. The term lambda is a scaling parameter, one of those parameters I can set, which tells me how far I want to move away from the mean along a certain direction: the bigger this lambda value, the further I move away from the mean, and the smaller it is, the closer I come to the mean. And then there is the covariance matrix, or actually the matrix square root of the covariance matrix. So what is the matrix square root? There are actually several ways of defining it: if you want to compute the square root of Sigma and you can write Sigma as a matrix product S times S, then S is the square root of this matrix. You sometimes find a different definition, S times S transposed, and you can actually use both of them. The reason we choose this one is that it has some numerical advantages and is strongly related to the Cholesky decomposition that you may know from linear algebra.
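Both definitions of the matrix square root just mentioned can be checked numerically; the small example covariance matrix here is an assumption for illustration.

```python
import numpy as np

sigma = np.array([[4.0, 1.0],
                  [1.0, 3.0]])   # example covariance (symmetric, positive definite)

# Square root via diagonalization: Sigma = V D V^{-1}, so S = V D^{1/2} V^{-1}.
eigvals, V = np.linalg.eigh(sigma)
S = V @ np.diag(np.sqrt(eigvals)) @ np.linalg.inv(V)
assert np.allclose(S @ S, sigma)       # S * S = Sigma

# Alternative: Cholesky factorization, Sigma = L L^T
# (the numerically preferred route in practical UKF implementations).
L = np.linalg.cholesky(sigma)
assert np.allclose(L @ L.T, sigma)     # L * L^T = Sigma
```

Note the two factors are different matrices (L is lower triangular, S is symmetric), but either can serve as the "matrix square root" used to place the sigma points.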
So how do we compute that? That's the first question we might have: how do we actually obtain this matrix? We can do that in a quite intuitive way. As our matrix Sigma is a covariance matrix, it is positive semi-definite and contains real numbers, and every symmetric matrix with real elements is diagonalizable. That means we can rewrite our matrix Sigma as a product of a matrix V times a matrix D, where D has diagonal form, only elements on the main diagonal, times V to the power of minus one. So this expression over here is something we can do: we can diagonalize it. Who of you knows something about eigenvectors and eigenvalues, has heard of that? Okay, that's good, at least some of you. This is actually pretty closely related, because those elements along the main diagonal here are the eigenvalues of this matrix. For those of you who may not know what an eigenvalue and an eigenvector are: if you look at a Gaussian distribution in 2D, and we have here our ellipse, the eigenvectors are the vectors which go along the main axes, so this is the first main axis of the ellipse and this is the second main axis; these are the two eigenvectors of this 2D Gaussian. And to every eigenvector there is an eigenvalue associated, and the ratio of the eigenvalues tells me the scaling between this axis and this axis: if both eigenvalues are the same, we have a circle, so the ratio between this length and this length is one, and if this one is twice as big, I would have an eigenvalue which is twice as big as the other one (strictly speaking the axis lengths scale with the square roots of the eigenvalues, but the intuition is the same). This gives you an intuitive idea of how this looks in the Gaussian framework, and you may
agree that it also makes sense to choose your sigma points, for example, along these main axes; that's actually one way to do it, so that you cover both parts of the Gaussian as well as you can. Okay, we were at the point where we wanted to compute the matrix square root, and what we know is that we can write our covariance matrix in the form V D V^(-1), where D contains only elements on the main diagonal. So I can rewrite that: I keep V and V^(-1) and rewrite the matrix D in the middle as a product of two matrices which have the square roots of the individual values on their main diagonal, because if I multiply those two matrices, I get exactly the matrix D. This works because there are only elements on the main diagonal; otherwise I couldn't do it that easily. Then I can define the matrix square root of this expression as V times D^(1/2) times V^(-1), where D^(1/2), let's call it D bar so as not to confuse it with the square root of Sigma itself, is this diagonal matrix of square roots. You can actually see why: if you compute S times S, you get V D bar V^(-1) V D bar V^(-1); the V^(-1) V in the middle gives the identity, so we have V D bar D bar V^(-1), and D bar times D bar is exactly D, so this is V D V^(-1), which is exactly our covariance matrix. So we take any algorithm we have for computing this diagonalization, take the square roots of the individual diagonal elements, and we have our matrix square root. That's one way to do it. The derivation is also here for you on the slide, so you don't have to copy it
from the blackboard; there is no further information beyond what I've written down here. And there is an alternative way to do it, sometimes called the Cholesky matrix square root. The difference is that you don't require S times S but L times L transposed. It works in essentially the same way, because these factor matrices have the same eigenvectors as our matrix down here, so you actually move along the same eigenvectors in the end. The reason people use this is that it is numerically more stable, and therefore in most practical implementations of the UKF you find people using the Cholesky factorization to compute L L transposed, and then L gives you exactly the matrix which you use as the matrix square root. So whenever you implement this, that's the way to go. So now we know how to compute this term over here, and then we can obtain our sigma points. Here is an example of those sigma points: the red ellipse is our original covariance matrix Sigma, then we have the square root of Sigma, which is the green curve over here, and these are the sigma points we have chosen. You can see here that the sigma points do not necessarily have to be aligned with the main axes; it can be the case, but in this example it's not at all the case. Whether it happens depends on the properties of the matrix you have, but you will always find the pattern that you have the mean, plus a vector in the plus and minus direction for every dimension. Okay, so let's look at the weights that we are going to use. This is one way to set the weights, and if you do it in this way, the property that we defined in the beginning, that we want to be able to recover our mean and covariance, is
But again, there are three parameters in there which you can choose: the scaling parameter that we had before, and the dimensionality n also appears. We actually have two different weight values here: one weight for the mean (the zeroth sigma point, which is the mean) and one for reconstructing the covariance. The superscript m always refers to the mean, and c to the covariance matrix. The zeroth sigma point has a special weight, and the weights for all the others are the same and are given by this equation over here. Again, alpha and beta are two different parameters. Why are they given in this way — couldn't you just replace this with a single term? Yes, you could, but I don't want to go into the details of why they are provided in this form; it is the standard way to do it, because if you have additional information about your nonlinear function and want to capture some of its higher-order moments, you can do so through the alpha and beta terms. That is the reason why this term here is not given by just one parameter but by two. So we now have three free parameters that we can choose. The reason there is more than one parameter set is that there is no unique solution to the problem of how to set the weights and the sigma points in order to recover the Gaussian moments, the mean and the covariance matrix. Okay, so now we have our points, and we can map them through our nonlinear function g; then we should be able to recover the transformed mean and the transformed covariance matrix. This is done exactly the way you would expect: you have your 2n+1 sigma points, and you sum over them, the weight of each sigma point times g of that sigma point.
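The weight construction just described can be sketched as follows; the parameter values are assumed example choices, not prescribed by the lecture:

```python
import numpy as np

n = 2                                   # dimensionality
alpha, beta, kappa = 0.9, 2.0, 1.0      # example parameter choices (assumptions)
lam = alpha**2 * (n + kappa) - n        # scaling parameter lambda

# All weights with index >= 1 are identical; the zeroth weight is special.
w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # weights for the mean
w_c = w_m.copy()                                  # weights for the covariance
w_m[0] = lam / (n + lam)
w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
```

The mean weights sum to one, which is what lets the transform reproduce the mean of the original Gaussian.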
So you map each individual sigma point through the nonlinear function g, multiply by the weight, and sum over all of them. You do that for the mean, and for the covariance matrix in exactly the same fashion, and then mu-prime and Sigma-prime are the parameters of the Gaussian approximation of the transformed Gaussian according to the unscented transform. Okay, here is an example of what that looks like; you know these plots from the EKF. Down here we have our Gaussian input distribution, and this is our nonlinear function. We compute our sigma points, propagate them through this function, and then recover the parameters of the transformed Gaussian exactly as we discussed before. This curve here shows what the exact distribution would look like — we had that before: if I really transform this Gaussian through this nonlinear function, that is the distribution I will obtain. But again, we said we want the best possible Gaussian approximation of it. The dashed line over here is the Gaussian we obtain through the unscented transform; the real mean and real covariance that you would compute from the exact distribution give the non-dashed line. They are different, but actually not too far from each other, and this approximation is better than the one the EKF would give you — but it is still an approximation. Just two more examples. This was our initial distribution; we compute our sigma points, these points here, and then we transform them. The function here is a linear one, just a shift by one in x and one in y, so we shift the points, recover our Gaussian distribution, and obtain the black result.
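Putting the pieces together, here is a hedged sketch of the whole unscented transform (sigma points, weights, moment recovery). The function name and parameter defaults are my own assumptions, and the sanity check uses a linear function, for which the transform must be exact:

```python
import numpy as np

def unscented_transform(mu, Sigma, g, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate N(mu, Sigma) through a function g (sketch, assumed defaults)."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    gamma = np.sqrt(n + lam)
    L = np.linalg.cholesky(Sigma)

    # 2n+1 sigma points around the mean.
    X = [mu] + [mu + gamma * L[:, i] for i in range(n)] \
             + [mu - gamma * L[:, i] for i in range(n)]

    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    Y = np.array([g(x) for x in X])            # transformed sigma points
    mu_p = w_m @ Y                              # recovered mean
    d = Y - mu_p
    Sigma_p = d.T @ (w_c[:, None] * d)          # recovered covariance
    return mu_p, Sigma_p

# Sanity check with a linear g: the unscented transform is exact here.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0], [0.0, 2.0]])
mu_p, Sigma_p = unscented_transform(mu, Sigma, lambda x: A @ x)
assert np.allclose(mu_p, A @ mu)
assert np.allclose(Sigma_p, A @ Sigma @ A.T)
```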
This is exactly what the linearization would give us, because this function is already linear, so nothing is lost in this case. If instead we take some arbitrary nonlinear function, the sigma points are mapped over here, and this is the Gaussian we are going to recover from them. In this case the result we obtain is typically substantially better than what the linearization would give us. How much better depends on how far the nonlinear function is from its linear approximation in the area in which you select the sigma points — and the sigma points are selected according to the parameters that we set. If they all lie very close to the mean, the result will be very similar to the linearization, to what the linearized function would do; if they go further away, you do better; but if you go too far away, it can actually get worse again, because you are no longer in the meaningful area of your distribution. Okay, just to summarize — everything here was on the slides before: this is how to select the sigma points, this is a strategy for how to select the weights, and then we are ready to go. Now just one single slide on how to select the parameters. This is the parameterization according to the scaled unscented transform, a generalization of the unscented transform which has some nice properties, and I use the parameterization here as it is used in the scaled unscented transform. The alpha value that we have seen before is a value between zero and one; it can take the value one but should not take zero, so it lies in the half-open interval (0, 1]. If we put Gaussian distributions in, beta equals two is what we are going to use.
If your function has other properties — if the transformation leads to higher-order moments — you can adjust this parameter beta to other values, but I don't want to go into the details of how to tweak these parameters; it is actually not trivial, and I also don't have that much experience with tweaking them. Then we have our term lambda; lambda depends on the value alpha, on n, and on the value kappa. You can choose any kappa greater than or equal to zero, and n is the dimensionality. So this is a strategy with which you can set all the parameters that you need, and then you can apply the unscented transform. Now just some examples to give you a little intuition for what happens if you change these individual parameters. On this slide I set kappa to three and just varied the value of alpha. If alpha is very small, all the points in this plot are indistinguishable, or nearly indistinguishable, from the mean. The smaller alpha gets, the closer the points move towards the mean, and the larger it gets, the further the points move away — see 0.01, 0.1, 0.25, 0.75: they move outwards. A similar effect appears if we fix alpha and simply vary the value of kappa; they both work in a similar way. The higher kappa gets, the more the points move outwards, even far outside the two-sigma bounds that are plotted here. Again, these are free parameters: some of them you can choose freely and will get the same results, and some you can optimize if you have background knowledge about the mapping function g — but I don't want to go into the details of setting these parameters. Okay, so what we have seen so far is the unscented transform; this is the basic tool we need in order to derive the unscented Kalman filter. What I would like to do now is take the extended Kalman filter algorithm that we had before and modify it.
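The effect of alpha (and kappa) on the spread of the sigma points falls directly out of the definition of lambda; a small sketch, assuming the kappa = 3 setting of the plotted example:

```python
import numpy as np

def sigma_point_spread(n, alpha, kappa):
    """Distance factor gamma = sqrt(n + lambda) scaling the sigma points."""
    lam = alpha**2 * (n + kappa) - n
    return np.sqrt(n + lam)          # simplifies to alpha * sqrt(n + kappa)

# Smaller alpha pulls the sigma points towards the mean; larger alpha
# pushes them outwards (kappa fixed at 3, alpha values as on the slide).
spreads = {a: sigma_point_spread(2, a, 3.0) for a in (0.01, 0.1, 0.25, 0.75)}
```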
I want to change this algorithm so that we end up with the unscented Kalman filter. As a reminder, this is what our extended Kalman filter looks like — I hope you still remember it. We will start with the prediction step: modifying it so that we no longer use the approximation through the Taylor expansion, the linearization, but instead use the unscented transform to compute the predicted mean and the predicted covariance. So we try to fix this part, get rid of the extended Kalman filter, and move to the unscented Kalman filter. The first thing we need to do is replace lines two and three of the extended Kalman filter with the lines that we need for the unscented Kalman filter, and I would like to do that now together with you on the blackboard. So what do we need to do in order to realize those two steps? You can cheat by looking up the slides on the web, but you can also use your brain. First we need to define our sigma points — that is the first step. We take our mean at t-1, and then mu at t-1 plus and minus gamma times the square root of Sigma at t-1; those are our sigma points. What do we need to do next? Map them through the nonlinear function. We could write down the set of transformed points, but to save a little space let's compute the mean directly: the predicted mean mu-bar at t is given by the sum from i = 0 to 2n over the weighted transformed points, the weights times g of the i-th sigma point. Then we have our transformed predicted mean.
What do we do with the covariance matrix — how do we compute it? Again we sum from i = 0 to 2n: the weight times (the transformed point minus the predicted mean) times the same term transposed. Am I done? That is what the unscented transform tells us, so we are done with the unscented transform — but there is still something missing for the Kalman filter step. When we execute a prediction step, we increase our uncertainty. In which way, and why? We add R_t, the motion noise: we expand our covariance matrix by this term. There was a question: why are we summing up to 2n and not 2n+1? Because we start from i = 0; from 0 to 2n there are 2n+1 terms, so we do take all sigma points into account. And that is exactly what is written down here; the only difference is that these points have already been transformed, so we move this computation directly in here. Of course, if you implement this, you would not execute the transformation three times; you would precompute it once. That is exactly the prediction step of the unscented Kalman filter. Next, we use the idea of sigma points and weights to do the same thing for the observation and to compute the Kalman gain. You can do that in exactly the same way — at least the beginning is exactly the same. So what do we need to do? You don't have to write it down, but I want to hear it again. We again generate new sigma points, now based on our predicted belief, and we need to do that because we need to approximate the nonlinear observation model, which maps from the state x to the space of observations. So we compute new sigma points for the predicted belief and transform them with the function h — not with the function g.
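The prediction step we just assembled on the blackboard could be sketched like this; the helper is a hedged illustration, not the book's exact pseudocode, and the parameter defaults are assumptions:

```python
import numpy as np

def ukf_predict(mu, Sigma, g, R, alpha=1.0, beta=2.0, kappa=1.0):
    """UKF prediction (sketch): unscented transform of the motion model g,
    then adding the motion noise R. Parameter defaults are assumptions."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    gamma = np.sqrt(n + lam)
    L = np.linalg.cholesky(Sigma)

    # 2n+1 sigma points of the previous belief.
    X = np.vstack([mu] + [mu + gamma * L[:, i] for i in range(n)]
                       + [mu - gamma * L[:, i] for i in range(n)])

    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = w_m[0] + (1.0 - alpha**2 + beta)

    Y = np.array([g(x) for x in X])           # transformed sigma points
    mu_bar = w_m @ Y                           # predicted mean
    d = Y - mu_bar
    Sigma_bar = d.T @ (w_c[:, None] * d) + R   # predicted covariance + noise
    return mu_bar, Sigma_bar
```

For an identity motion model the prediction leaves the mean unchanged and inflates the covariance by exactly R, as expected.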
The function h maps into the space of observations, so we obtain points in the space of observations, and then we recover the uncertainty and the mean of this transformed observation. Then we compute the Kalman gain based on that and continue exactly as we would with the extended Kalman filter. Written down algorithmically: you again generate your set of sigma points, now from the predicted belief at time t — the best estimate we have right now, the prediction. Then we compute the transformed points by taking the sigma points from the predicted belief and propagating them through h, and compute a predicted observation, which is the mean of the observation, together with the associated uncertainty that we expect. This uncertainty is again the one coming from the unscented transform, plus the uncertainty Q_t that we have in our observations. The only thing that now changes slightly is how we compute the Kalman gain. It is actually not different — it is exactly the same as what the EKF does — but the formula looks a little different. The reason is that the term S we compute here from these points was not directly a matrix we had in the EKF; there, it was the product of the Jacobian of h, the predicted uncertainty, and the Jacobian transposed. But we no longer have that Jacobian H, because we do not want to linearize this function, so it looks a little different — yet one can show that it turns out to be exactly the same, just using the unscented transform. This matrix S corresponds — this is the line coming from the Kalman filter equations — to this term over here: H Sigma-bar H-transposed plus Q_t is exactly what this expression gives us. So this turns into this term; this is S_t.
And by relating the state and the observations, we obtain exactly this term over here: this term is computed according to this expression, the second part is S, and the Kalman gain is then simply this matrix times the inverse of S, so we end up with exactly the same term we had before. The matrices that are written down have different names, but in the end we obtain exactly the same thing. Once we have the Kalman gain, we compute the mean exactly as before, and the covariance almost exactly as before — just with everything now expressed through this matrix S. If you take this line over here and compare it to what the EKF does — this was the line in the EKF which contained the Kalman gain and the Jacobian of the measurement function — then after some derivations we end up exactly with what is written here. If you don't believe it, the derivation is also shown, in large print, in the last line — also for those watching at home on the video. So: this is the line we have in the extended Kalman filter for computing the final covariance matrix. If we multiply it out, we replace the identity term by the predicted uncertainty, and the remaining terms give exactly the matrix of cross-correlations between x and z. Then I can insert S^-1 S — that is an identity matrix, so I can simply multiply it in here. What is done next? This expression over here is exactly the Kalman gain according to the definition we had before on the slide, so we end up with the Kalman gain; then we get rid of the transpose inside the brackets, obtain this equation, and we are there. It is just a different way of writing it, because we don't have H explicitly — what we computed is only H Sigma-bar, so to speak — and therefore we end up with exactly this form.
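The corresponding correction step, again as a hedged sketch rather than the official algorithm: the innovation covariance S and the state-observation cross-covariance replace the Jacobian-based EKF terms; names and parameter defaults are my assumptions:

```python
import numpy as np

def ukf_correct(mu_bar, Sigma_bar, z, h, Q, alpha=1.0, beta=2.0, kappa=1.0):
    """UKF correction (sketch): new sigma points from the predicted belief
    are mapped through the observation model h."""
    n = len(mu_bar)
    lam = alpha**2 * (n + kappa) - n
    gamma = np.sqrt(n + lam)
    L = np.linalg.cholesky(Sigma_bar)

    X = np.vstack([mu_bar] + [mu_bar + gamma * L[:, i] for i in range(n)]
                           + [mu_bar - gamma * L[:, i] for i in range(n)])
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = w_m[0] + (1.0 - alpha**2 + beta)

    Z = np.array([h(x) for x in X])              # expected observations
    z_hat = w_m @ Z                               # predicted observation
    dz = Z - z_hat
    S = dz.T @ (w_c[:, None] * dz) + Q            # innovation covariance S_t
    dx = X - mu_bar
    Sigma_xz = dx.T @ (w_c[:, None] * dz)         # state-observation cross-cov.
    K = Sigma_xz @ np.linalg.inv(S)               # Kalman gain

    mu = mu_bar + K @ (z - z_hat)
    Sigma = Sigma_bar - K @ S @ K.T
    return mu, Sigma
```

With an identity observation model and an observation equal to the predicted mean, the mean stays put and only the covariance shrinks, mirroring the EKF behavior in the linear case.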
All the terms have been computed already, and there is no black magic behind this calculation; it is just a different way of writing it, because we have other matrices computed on the way towards the solution, and we use these matrices to do exactly the same calculations. Okay. If we compare the result of the unscented Kalman filter with the extended Kalman filter: this is our nonlinear function, this is our input Gaussian distribution, and we map it through the nonlinear function. What you see on the left is the result of the EKF, and this is the result of the UKF. The function is exactly the same, and the true mean and the true Gaussian — the non-dashed lines — are also exactly the same, because they are a property of the nonlinear function, not of the UKF or EKF realization. But the dashed lines are the outputs of the UKF versus the EKF, and you can see that the predicted mean of the EKF is further away from the true mean than that of the UKF, and the UKF's dashed line is also closer to the true transformed distribution than the EKF's dashed line over here. So the transformation of the Gaussian in the EKF is a worse approximation than what the UKF does. If we show this again with a smaller covariance — remember the effect: if the variance of the input distribution is smaller, the linearization is typically not as bad, because the probability mass is closer to the linearization point of the EKF — then you can also see that the EKF is now much closer to the true distribution, but the UKF still does a little better. I am not sure you can see it from the back, but this curve is a little closer than what the EKF estimates. So again, there is an advantage to using the UKF.
It depends on the nonlinear function: the more nonlinear your function is — the worse the approximation through a single linearization is for your model — the better the UKF will perform compared to the EKF. One example where you can see the difference concerns the motion of a mobile robot. Look at the banana-shaped distribution you obtain if you propagate your motion according to the standard odometry- or velocity-based motion model; that is what the real distribution looks like. If you do the EKF approximation of it, this is what you obtain: there are areas with significant probability mass which are badly covered by this approximation. If you compare that to the UKF approximation, you can see that the uncertainty is bigger in this dimension over here, and the distribution is covered better than by the EKF. This is one of the typical situations in which you see a difference between the EKF and the UKF when modeling the motion of a mobile robot. Of course, the noisier the motion of the robot is, especially in the heading, the bigger the difference; if the heading error is very low, this effect is probably negligible. And here is one more example, taken from a book — again a comparison between what the UKF does and what the EKF does. This is my original Gaussian distribution; I can sample a lot of points from it, covering it very densely with samples, and I have some nonlinear function f. I map all those points, so the points are then distributed according to this function over here, and this is the true mean and the true covariance, computed from thousands of points. So it is not ground truth, but close to it — you could take more points to get closer.
Let's just regard this as ground truth here. If you perform the EKF step — for the mean you linearize the function and then propagate everything through the linearization — you end up with this pink uncertainty ellipse and this mean over here, and you can see a substantial difference to what the true distribution actually looks like. If you use the UKF instead — sample the sigma points over here and transform them — you get the green Gaussian estimate. Again it is not the true estimate, but by visual inspection it is much closer to the truth than the result the EKF provides. So if you have highly nonlinear functions, the UKF can be a very interesting alternative to the EKF, because it has better means for computing the Gaussian approximation of a Gaussian propagated through a nonlinear function. Let me summarize this a little. The unscented transform is an alternative to the linearization, and it is typically better than the Taylor expansion that the EKF uses; the advantage of the unscented transform shows for highly nonlinear functions. Instead of having one linearization point, the mean, the unscented transform adds additional points: sigma point number zero is the mean, and further points are added around it; these points are propagated through the nonlinear function, and the Gaussian estimate is recovered from them. There are three parameters in the unscented transform, because there is no unique solution for how to set the weights and the sigma points. We can use the unscented transform in the prediction step and in the correction step of the EKF, and we then obtain what is typically called the unscented Kalman filter, the UKF.
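The sampled comparison just described can be reproduced in miniature; the nonlinear function (a polar-to-Cartesian style map), the noise levels, and the sample count are illustrative assumptions, not the book's example:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Polar-to-Cartesian style map: (r, theta) -> (r cos theta, r sin theta)
    return np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])

mu = np.array([1.0, 0.5])
Sigma = np.diag([0.01, 0.25])    # large heading noise -> banana-shaped output

# Dense-sampling "ground truth" mean.
samples = rng.multivariate_normal(mu, Sigma, 100000)
pts = np.column_stack((samples[:, 0] * np.cos(samples[:, 1]),
                       samples[:, 0] * np.sin(samples[:, 1])))
true_mean = pts.mean(axis=0)

# EKF-style mean: just propagate the mean through f.
ekf_mean = f(mu)

# Unscented-transform mean with 2n+1 sigma points (lambda = 1 assumed).
n, lam = 2, 1.0
L = np.linalg.cholesky(Sigma)
gamma = np.sqrt(n + lam)
X = [mu] + [mu + gamma * L[:, i] for i in range(n)] \
         + [mu - gamma * L[:, i] for i in range(n)]
w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
w[0] = lam / (n + lam)
ut_mean = sum(wi * f(xi) for wi, xi in zip(w, X))

err_ekf = np.linalg.norm(ekf_mean - true_mean)
err_ut = np.linalg.norm(ut_mean - true_mean)
```

For this heading-noise-dominated example, the unscented mean lands much closer to the sampled mean than the linearization point does.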
If you compare the results of the UKF and the EKF: they give you the same results for linear models. There may be numerical issues — you are never completely free of numerical issues — but if you compute with high precision, you should get the same result when you run the EKF and the UKF on linear models. For nonlinear models, the UKF is a better approximation than the EKF. However, most people say that for reasonably engineered robots — meaning the uncertainty is not extremely large in the motion and not extremely large in the observations — this difference is rather small. You can see it, but it really depends on your application: if you have a really crappy motion system on your robot, it may be worthwhile to use the unscented Kalman filter, but for a decently designed system the difference to the EKF is really small. One advantage of the UKF is that you don't need to compute the Jacobians; if you are a little lazy, or want to avoid an additional source of error, you can skip that with the UKF. Both filters are in the same complexity class, but the UKF is typically a little slower than the EKF if you really do an implementation, because you need to transform all those points. So in practice the UKF is a bit slower, but in the same complexity class. And again, we are still restricted to Gaussian distributions; the only thing we do is obtain a better Gaussian approximation of the underlying distribution by using the unscented transform instead of the Taylor expansion for the linearization. If you want to know more about this, it is Chapter 3.4 in the Probabilistic Robotics book, which follows directly after the EKF in the book and explains what we have done here; the notation should be very similar to what I have shown. There are also two further readings — not books, but articles.
One article which I found nice is "A New Extension of the Kalman Filter to Nonlinear Systems" by Julier and Uhlmann — Julier is actually one of the people behind the UKF. It is a reasonably new technique, less than 20 years old, compared to the Kalman filter, which dates back to the 1950s. There is also, I think, a script of another lecture, which you can find on the web; it is four pages, in German, but for those of you who prefer German literature, I found it actually worth reading — it was quite readable as well. Okay, that's it from my side for the UKF, and we will now look into
SLAM_Course_2013
SLAM_Course_21_Short_Summary_201314_Cyrill_Stachniss.txt
okay, so this is again a very short summary; it should just wrap up a little what we've done, but very briefly — in about ten minutes from now. So, what you should have learned in the course: if you don't know one of the things I'm talking about here today, that means you should really revisit the course material, although knowing only what's in here is definitely not sufficient, because it is a very high-level overview. What is SLAM? We defined SLAM in the beginning as follows: we want to estimate the trajectory of the robot — the path the robot, or the sensor, took during data acquisition — as well as the map of the environment; so the path and the map, given my sensor observations and given the controls that the platform actually executed. We learned different variants of what a map could look like: it can be a set of landmarks, 3D poses in the environment, or a dense grid map — so we looked at different ways of representing the world — and this in combination with the trajectory of the robot, which can have three degrees of freedom or six degrees of freedom; it just depends on the platform that we are using and the assumptions that we make. We are interested in a distribution, a probability distribution, because the single best match is not always the right answer; so we want the probability distribution given our observations and given our controls, and we want to exploit that. SLAM is often referred to as a chicken-and-egg problem, because if we know the poses, estimating the map is easy, and if we know the map, estimating the poses is easy: one is just mapping with known poses, and the other is just localization, which are in themselves easier problems than solving the full SLAM problem. If both quantities are unknown, it is actually more difficult. We mainly looked into three paradigms, and you should be aware of all three of them.
Each of the three paradigms has its advantages and disadvantages. The first was the Kalman filter and its family and friends: the Kalman filter, the extended Kalman filter, the unscented Kalman filter, the extended information filter, and the sparse extended information filter, which we touched on only very briefly. Then we looked into particle filter approaches — how can we solve the SLAM problem with a particle filter? These were mainly the FastSLAM approach for landmarks and the grid-based variant of FastSLAM, also called the grid-based Rao-Blackwellized particle filter for mapping — two of the prominent solutions from the particle filter group of approaches, which have been used to address the SLAM problem in a couple of different systems. And finally, last but definitely not least, the graph-based approaches, which we presented at the end. These are motivated by minimizing the sum of squared errors, which can be shown to be equivalent to finding the mode of a Gaussian distribution. There are some nice advantages of the graph-based framework in terms of the flexibility it gives you for linearization and relinearization, so the graph-based approaches are the most popular ones, at least among the more modern systems proposed after about 2006; before that, the Kalman filter was the dominant system, but today the graph-based approach is the one very frequently used in the context of SLAM. We typically distinguish two different variants of SLAM: full SLAM, where we estimate the whole trajectory of the platform and the map of the environment, versus the online SLAM problem, where we are only interested in the current, that is, the last, pose of the robot, because we are not interested in the previous poses — so we marginalize out all the previous poses.
We are then only interested in the map and the latest pose that we have. In the graph-based approach we always estimated the full trajectory; in the Kalman-filter-based approach we typically stick with the online variant, because we only had the current pose of the robot in our state vector. So, what you should have learned in this course: you should have an idea of what the SLAM problem is — that is hopefully the case for everyone here. You should know how to build a map of landmarks and also grid maps; if I give you known poses and a dataset with observations, you should be able to generate a nice-looking grid map or a nice-looking landmark-based map. Mapping with known poses was the very elementary part. Then — chronologically the order was a bit different — we started with the EKF framework. You should have a pretty good idea of what the EKF does, because the EKF was the first successful system used in SLAM; a lot of systems, especially older ones, still use the EKF framework, and it is very widely used. Understanding what this filter does, which assumptions it makes, and what its limitations are is a really important part of understanding the motivation for a lot of approaches presented afterwards, because there is a huge number of papers simply trying to find solutions to the problems that the EKF has. Then we looked into the sparse extended information filter; you should have an idea of its key advantage — why did we move from the EKF to the sparse extended information filter? It is a constant-time, or nearly constant-time, algorithm, which was a huge gain compared to the quadratic per-iteration complexity that the Kalman filter has for a large number of landmarks, because there we have to maintain the full covariance matrix. Then we looked at the particle-filter-based approaches: you should know what a particle filter is.
You should have the idea of the particle filter and its three key steps — drawing samples from the proposal, computing the weights, and the resampling step — and you should know which properties are exploited in order to apply it to the SLAM problem. What was Rao-Blackwellization? How do we split up the estimate of the trajectory of the robot from the map of the environment in order to do this efficiently? That was the key ingredient to make the particle filter work in the context of SLAM. And then we looked into the graph-based approaches, which we did just after the Christmas break, and discussed the key elements: how to build maps, how to do it with landmarks, how to deal with outliers, how to arrange things in a hierarchical fashion to become faster; and we also very briefly looked into a front-end. The other thing I hope you learned is some hands-on experience: if you followed the exercises that Reina and Fabrizio ran here during the term, you should have the ability to code a standard SLAM system — definitely the back-end. You should be able to code an EKF, a particle filter, and a graph-based SLAM system; at least if you did the exercises, you have experienced the key aspects of those systems. Of course, a framework was given to make your life easier, but my real hope is that if you sit down — say you have a good idea and want to modify the graph-based SLAM system to realize it — you are able to implement that. That was actually one of the targets I set for myself for this course. The next thing, which we haven't tried yet, is to actually understand a typical, average SLAM paper from a conference. If you look at any large robotics conference, you will see several sessions dedicated to SLAM.
multiple sessions of several authors talking only about SLAM, so you find a couple of SLAM papers at any large conference. My goal is that you actually understand most of them. Definitely not all of them — I don't understand all of them either — but there should be a really large number of papers that you understand, or at least where you get 90 percent of the paper. There may be a comment here or there that one doesn't understand, myself included, but you really should be able to get the key idea of what the authors want to communicate. Therefore a lot of the course material on the website consists of papers I reported on here, and you should be able to understand basically all of them; that was at least my goal. So if you want to continue working in the context of SLAM, you should be well prepared, because you know what has been presented — I even presented approaches that were developed and published last year, so there is a lot of up-to-date material in there.

Now, a comparison of the different approaches. I'm not a big fan of saying one is much better than another, because it always depends strongly on the application, on the assumptions you make about your platform, and on the way you can engineer your environment — if you can put out perfect features in the environment and it helps the application, there is no reason not to do that. But I would like to do at least a bit of a comparison of what the different systems can do, and I would like to do it together with you on the blackboard, because then you will probably remember it better. So we had the Kalman filter approach, we had the EKF, we had SEIF, we had the particle filter, and we have the graph-based approaches. As criteria we take the computational complexity, the assumptions about the distribution, whether linearization is problematic or not, the flexibility — how easily can I change things, for example remove a constraint if I later realize I made a mistake in the beginning — and how well suited the systems are for large-scale SLAM.

I would like to discuss this with you now. What was the computational complexity of the Kalman filter per iteration, in the number of landmarks? "n cubed?" In general I agree, but not for the SLAM problem. The cubic cost comes from a matrix inversion, whose complexity is roughly n to the power of 2.4, which sits in the cubic complexity class — but we need that inversion only if our sensor lets us observe all features at the same point in time, and that doesn't hold for our problems because of limited sensor range. So, say we have n landmarks: how many operations do we need in the worst case per iteration? We end up with quadratic complexity: I have to represent my covariance matrix and perform all the operations on it, so it is typically quadratic per update. What is the assumed distribution of the Kalman filter — what kind of distributions can it represent? Purely Gaussian. What about linearization or linearity assumptions? It is not even linearization: the Kalman filter assumes everything is linear, and if the models are not linear it cannot be applied. How would you rate it in terms of flexibility — how easily can you change things? This is a soft measure, so just tell me what you think.
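Before settling the flexibility question, the quadratic-complexity claim above is worth making concrete. Below is a minimal sketch (hypothetical dimensions, plain NumPy — not the course's code) of a single Kalman/EKF correction step in a SLAM-style state: with n landmarks the state has 3 + 2n entries, and the gain update rewrites the entire covariance matrix, which is where the per-iteration quadratic cost comes from.

```python
import numpy as np

def ekf_correction(mu, Sigma, z, z_hat, H, Q):
    """One (E)KF correction step.

    mu    : (d,)   state mean (robot pose + 2 entries per landmark)
    Sigma : (d, d) full covariance -- maintaining this makes EKF-SLAM O(n^2)
    z     : (k,)   observation; z_hat: predicted observation
    H     : (k, d) (linearized) observation Jacobian; Q: (k, k) meas. noise
    """
    S = H @ Sigma @ H.T + Q                    # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)         # Kalman gain, (d, k)
    mu = mu + K @ (z - z_hat)
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma  # rewrites all d*d entries
    return mu, Sigma

# toy example: robot pose (3) + 4 landmarks (8) -> d = 11
d = 11
mu, Sigma = np.zeros(d), np.eye(d)
H = np.zeros((2, d)); H[0, 0] = H[1, 1] = 1.0  # observe x, y of the pose only
mu, Sigma = ekf_correction(mu, Sigma, np.array([1.0, 2.0]),
                           np.zeros(2), H, 0.1 * np.eye(2))
```

Note that although the observation touches only two state entries, the last line updates all d-by-d covariance entries — quadratic in the number of landmarks, every iteration.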
"Not that flexible," someone says — I would agree with that. In which areas would you see some flexibility, if it's not too flexible? It is more designed for the landmark case, where you estimate the positions of landmarks — that's a fair statement. In terms of flexibility, one thing it does allow you to do nicely is encapsulate the front end from the back end, because you only feed a Gaussian estimate into it: whatever your sensor is, as long as the front end produces a Gaussian, you can plug it in without changing the back end, which is actually quite nice. So I gave it medium flexibility. What about large-scale mapping? That's more on the minus side — it doesn't really scale.

Okay, let's go to the EKF. The complexity is exactly the same as for the Kalman filter. The assumed distribution? Again Gaussian. Do we linearize in the EKF — how can we deal with nonlinear functions? Yes, by linearization. And how often do we linearize? Every function is linearized only once: we get the observation, we linearize once, and then we stick with that linearization point forever, because it is integrated into the system. So if you are far away from a good solution, that linearization point may not be good. In terms of flexibility I would give it the same score — the linearization helps a little, but it's not dramatically different. For large-scale mapping, exactly the same issues as with the Kalman filter, and the same complexity.

Okay, the sparse extended information filter. What was the complexity of the SEIF in the number of landmarks? Constant — that's really cool, constant time, a big win. What about the underlying distribution? Again Gaussian. Linearization? Exactly the same as for the EKF: once. Flexibility? "It's easier"? You would be the first to say that — what can you do with it that you can't do with the EKF? In fact it is harder to implement, because you have to consider how to sparsify your matrix. And constant complexity doesn't make it more flexible — it makes it faster; I completely agree with that, but I would give SEIF a very similar flexibility score to the EKF. Large-scale mapping? Yes, it's really good, because it's a constant-time algorithm: whatever the number of landmarks, it takes the same amount of time — a big plus. But there is also a certain minus: it still suffers from linearizing only once. If you add more and more nodes and the problem becomes nonlinear, you are likely to diverge, because the linearization assumption doesn't hold. So for really large-scale maps there is a limiting factor resulting from the single linearization.

Okay, let's look at the particle filter — and let's consider only landmarks here. What is the complexity of the particle filter, if you use the right implementation, not the trivial one? We are talking about computational, not memory, complexity. It is linear in the number of particles and logarithmic in the number of landmarks, because you only need to traverse a special tree structure, which can be done efficiently. What are the assumptions about the underlying distribution? Exactly: I make no assumption, other than that I can represent the distribution with particles — for the pose, any distribution; for the
landmarks, given the poses, I assume a Gaussian — the landmarks given the poses are Gaussian, at least in FastSLAM. What about linearization? I typically don't linearize for the poses, because I just propagate point estimates, so no linearization is needed there; for the observations it may be different, depending on the implementation. In terms of flexibility? A plus, because I can use any kind of distribution for the pose — that gains me flexibility, I agree. What else? Resampling is also nice. And what can I do here that I couldn't do before? Exactly: multiple data-association hypotheses. I don't need to commit to a single data-association hypothesis — each particle can make its own decision — so I am really flexible with respect to data association, and that gives a really nice plus in flexibility. What about large scale? Log n is still pretty good — constant is better, but logarithmic is still pretty good — and it avoids some of the disadvantages of linearizing only once, so that's better here.

So what's the complexity of one iteration of my graph-based approach? If everything is sparse, the world is good. I build up my H matrix and the vector b, and these are always sums over the constraints, so it is linear in the number of edges: I iterate over all edges, because those are the contributions I need to accumulate in my matrix, and under the assumption that everything is sparse — which is the case if my scanner doesn't observe the whole world at once — each iteration is linear in the number of edges. The number of edges in turn scales roughly with the number of positions where the robot has been, because consecutive poses typically have constraints between them; of course I may revisit a place and add some more, but in most cases it is some function of the number of nodes. And of course I need to iterate, but if I assume I iterate a little bit at every point in time, that is just a constant factor — say, a roughly edge-linear number of operations every time I add a constraint.

What about the underlying distribution — what is the assumed distribution? Gaussian — but what did we add to the framework to leave the purely Gaussian world? We were able to deal with outliers. That is an important point for robustness: if a few constraints are completely off, we have really good ways of dealing with that. What about linearization? Don't make me cry here — it was one of the key advantages: every time we evaluate our cost function, we re-linearize. We really re-linearize in every iteration, and this is a big plus, because even if we start from a bad solution, the closer we get in every iteration to the (hopefully) right solution, the better the linearization becomes. If you deal with a somewhat nonlinear system, this is a really important point, and it increases robustness dramatically, especially for large-scale mapping with graph-based approaches. In terms of flexibility? It gets two pluses from my side: you can add constraints, you can handle multimodal hypotheses with the max-mixture approach, you can remove stuff. Of course the problem grows with the trajectory, which may be slightly bad, but there are even ways of bringing that down — you can throw away some of the collected information — so it is really flexible. What about large-scale mapping? I would even say it ranges from a minus to two pluses, depending on what you are actually doing: if you solve the full, non-sparse problem all the time, it could even be a minus, but if you use the hierarchical approaches and really invest some time in solving the problem efficiently, you can do it in a very efficient manner.
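The "linear in the number of edges" claim above can be made concrete with a 1-D toy pose graph (hypothetical helper name, dense matrix only for brevity — in a real system H is stored sparse): the only loop in one Gauss-Newton iteration runs over the constraints, each adding a small block to H and b.

```python
import numpy as np

def gauss_newton_step(x, edges):
    """One Gauss-Newton iteration for a 1-D pose graph.

    x     : (n,) current pose estimates
    edges : list of (i, j, z_ij, omega) constraints saying x_j - x_i ~ z_ij

    The single loop over the edges is what makes each iteration
    linear in the number of constraints.
    """
    n = len(x)
    H = np.zeros((n, n))          # sparse in practice; dense here for brevity
    b = np.zeros(n)
    for i, j, z, omega in edges:  # accumulate contributions edge by edge
        e = (x[j] - x[i]) - z     # error of this constraint
        # Jacobian of e w.r.t. (x_i, x_j) is (-1, +1)
        H[i, i] += omega; H[j, j] += omega
        H[i, j] -= omega; H[j, i] -= omega
        b[i] -= omega * e; b[j] += omega * e
    H[0, 0] += 1.0                # gauge constraint: pin the first pose
    dx = np.linalg.solve(H, -b)
    return x + dx

# chain 0 -> 1 -> 2 plus a loop closure 0 -> 2
x = np.array([0.0, 0.9, 2.2])
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (0, 2, 2.0, 1.0)]
x = gauss_newton_step(x, edges)
```

Because this toy problem is linear, a single iteration already lands on the optimum; in the nonlinear case the same loop is simply repeated, re-linearizing each time.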
You can argue whether that deserves two pluses or one plus, but this is just a rough indication of what these systems typically look like.

Okay, I'm more or less done — the time is over, I know. Where do you see open issues? Is everything solved? One could argue that the systems actually work quite well in practice — and graph optimization is indeed not the bottleneck anymore; I think we can confidently make that statement for most systems. But from a general point of view, beyond the optimization: can SLAM do everything? You guys are too tired, so let me defuse it. The things that are still open issues: dynamic environments — what if the environment changes? So far we always assumed the environment is static. Especially systematically changing environments: if you are in a factory, in the morning the world is typically in state A, later in state B, later in state C — there is a systematic pattern in there, and how to exploit it is open. Seasonal changes: if you drive around outdoors with your camera, a winter scene looks dramatically different from the same place in summer, and it would be nice to recognize this if you want to do SLAM across multiple seasons. Online solutions are also open: there are quite good online solutions, but if you scale to larger and larger environments, at some point you may still not be able to optimize at every point in time. Lifelong operation, related to the previous points, is also not fully solved — and so is finding constraints efficiently, because with a lot of observations, finding constraints itself becomes a big bottleneck. Resource-constrained systems: if you do everything on a small helicopter that has, say, the computational power of your mobile phone, you want to do everything on board. People are working on that, and there has been dramatic progress in this respect, but there is still no out-of-the-box solution you can buy — it may come, but currently it's not available — and especially on resource-constrained systems such as flying vehicles or humanoids there are still limits on what you can do. What about failure recovery, or aiming for zero user intervention — the system should identify by itself that it made a mistake? Also something that is not really solved yet. And there is the question of how we can exploit prior information: if we have some knowledge of what the world looks like, or typically looks like, can we exploit that to build better maps instead of starting from scratch with a completely empty hard disk? What can we do to exploit what other robots have learned — sharing maps between robots, or relying on knowledge that other systems, or the system itself, accumulated beforehand? Can we use all that information to build better maps? That is also still somewhat an open issue.

Okay, at this point I would like to close the course. I would just like to say that it was again a big pleasure to teach it. This will actually be the last time I teach this course here in Freiburg, as I am going to leave the University of Freiburg basically the day after tomorrow and will only come back for the exams. But I have to say it was a great pleasure with you guys. Maybe someone else will teach it next year, I don't know, but I really enjoyed it — it was a nice course, and I also enjoyed the interaction with you. So thanks again, and maybe we see each other in the future somewhere at a robotics conference or something like
that that would be a nice pleasure thank you very much
---
Playlist: SLAM_Course_2013
File: SLAM_Course_12_FastSLAM_201314_Cyrill_Stachniss.txt
---
Okay, then let's start today. Last week we introduced the particle filter — the basics of particle filtering: how to do recursive Bayesian state estimation with non-parametric distributions. We looked in particular at the sample-based representation: the idea is to use a set of samples to represent the state space and the distribution over possible states. We can see every sample as a pose hypothesis — every sample represents one possible state the system might be in — and the probability distribution is then given by a sum over the samples, each with a weighting term and a Dirac distribution at the sample's state. The key advantage of this sample-based representation is that we are very flexible in the type of distribution we can represent: we are not restricted to Gaussians, as was the case for the Kalman filter or the extended Kalman filter, but can easily represent multimodal distributions. Today I would like to look into the SLAM problem: how can we address SLAM using particle filters?

First, a two-slide repetition of the particle filter from last week. The particle filter is a non-parametric estimation technique: I don't have a parametric form, like a Gaussian, to represent my probability distribution; I use weighted samples, and the number of samples that fall into a region of the state space is proportional to the probability of that region — that's the key idea. What we haven't discussed in detail, but what I at least mentioned last time, is that the particle filter is very well suited for state estimation in low-dimensional spaces; high-dimensional spaces are more challenging. Why? The reason is that I need a sufficient number of samples to cover the regions of high likelihood in my state space.
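The weighted-sample representation just described can be written compactly (generic notation: $N$ samples with weights $w^{(j)}$ and a Dirac point mass $\delta$ at each sample state):

```latex
p(x_t) \;\approx\; \sum_{j=1}^{N} w_t^{(j)}\,
        \delta\!\left(x_t - x_t^{(j)}\right),
\qquad \sum_{j=1}^{N} w_t^{(j)} = 1,
```

so the probability mass assigned to a region of the state space is proportional to the total weight of the samples falling into it.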
In a high-dimensional state space I need a huge number of samples, and therefore these approaches work well in low-dimensional spaces — three, four, five, six dimensions, something along those lines — but they do not work well in, say, 100-dimensional spaces. That is one of the problems of the particle filter: it becomes computationally inefficient, because it needs such a huge number of samples that the approach simply does not scale to higher dimensions, at least not in its standard, direct form.

The particle filter itself consists of the three steps we discussed. The first is the sampling step, where I draw samples from my proposal distribution. The second is the importance-weighting step, which results from the importance-sampling principle: if the proposal distribution is not the distribution I want to approximate — and this is typically the case — I need a correction step, computing an importance weight that accounts for the difference between the target distribution (the one I want to approximate) and the proposal distribution (the one I drew my samples from). The last step is the resampling step, which redistributes the samples: a sample with low probability is likely to be replaced by a sample with high probability — a survival-of-the-fittest principle — in order to concentrate the samples in the likely areas of the state space.

In algorithmic form we had these three steps. First, I draw a new sample — the j-th sample at time t — from my proposal distribution π. For the localization example, we used the odometry motion model to generate the next generation of particles: we ask how the pose of the robot evolved, given that we
executed a motion command. That is what we use the proposal distribution for, and it will also be the proposal distribution in the SLAM context, as we will see today. The second step is the importance weighting. This is a direct result of the importance-sampling principle, which says that if I draw samples from an arbitrary proposal distribution but want to approximate a target distribution, I need to assign a weight to each sample, given by the target divided by the proposal, both evaluated at the location of the drawn sample. So a sample where, for example, the target distribution has a very high likelihood but the proposal a very low one gets amplified by receiving a high weight — this simply accounts for the difference between target and proposal. The last step is the resampling process: drawing samples with replacement, each sample drawn proportionally to its weight, until I have drawn N samples. After resampling, all the weights are set to 1/N, because the locations of the samples are then distributed according to the target — which is what I wanted in the end. These were the key steps of the particle filter we discussed last time, and what I maintain is a set of weighted samples, x and w, representing the states and the weights.

If we now think about the SLAM problem, we move away from localization, where the state space was an (x, y, θ) location, towards estimating the trajectory of the robot together with all landmark locations — the whole map. Our state space then grows from a three-dimensional state to a high-dimensional one: the poses where the robot has been, plus all landmark locations.
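The three steps just summarized can be sketched in a few lines. This is a minimal 1-D localization-style illustration, not the SLAM version: the motion and observation models (Gaussians with made-up noise values) and the function name are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, u, z,
                         sigma_motion=0.2, sigma_obs=0.5):
    """One iteration of the three particle-filter steps for a 1-D state.

    The proposal is the motion model; the observation model is a
    hypothetical Gaussian likelihood, used only for illustration.
    """
    n = len(particles)
    # 1) sampling: draw each particle from the proposal (motion model)
    particles = particles + u + rng.normal(0.0, sigma_motion, size=n)
    # 2) importance weighting: target/proposal reduces to the obs. likelihood
    weights = weights * np.exp(-0.5 * ((z - particles) / sigma_obs) ** 2)
    weights /= np.sum(weights)
    # 3) resampling: draw with replacement, proportional to the weights
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)     # weights reset to 1/N after resampling
    return particles, weights

n = 500
particles = rng.normal(0.0, 1.0, size=n)
weights = np.full(n, 1.0 / n)
for u, z in [(1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]:
    particles, weights = particle_filter_step(particles, weights, u, z)
```

After a few steps the sample cloud concentrates near the observed position, and the uniform weights after each resampling reflect the 1/N reset discussed above.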
For localization we just had (x, y, θ) — a three-dimensional state space — and now we have a much higher-dimensional one. As I said before, the particle filter is an effective representation for low-dimensional spaces, but that is not the case in our setting: if I have, say, a million landmarks and a long trajectory, I easily have a state space of two million dimensions plus the length of the trajectory, which is orders of magnitude larger than what a particle filter with any number of samples representable in a computer can handle. So there is no way of applying a standard particle filter that samples the overall state space — that is one of the key reasons it took quite a while until people found a way to build particle-filter-based SLAM systems.

The key idea is to look at the individual variables in our state space and see whether we can exploit a dependency between them — between the poses and the individual landmark locations. We have the positions of the robot during data acquisition, x_1 to x_T, and the locations of all M landmarks; it is a high-dimensional state space, we want to cover it with samples, but we cannot cover the full state space with a sampling technique, so we have to do better. Do you see any way of exploiting a dependency between those variables? Is there any dependency between them that one can assume? You really think the poses of the robot and the landmarks in the environment are independent of each other? I strongly disagree — otherwise this whole lecture would not take place, because then there would be two separate estimation problems, and simultaneous localization and mapping wouldn't exist. So there must be something else. "The robot's course depends on the landmarks, and the landmarks on each other"? Well — if you think about the occupancy grid maps we had last week, what dependency did we exploit there? In the grid-mapping approach that I presented — the work of Moravec and Elfes, done in the context of grid mapping, but that shouldn't bother us; just consider it was a map — why was it so easy to compute? "We assumed independence of the grid cells." That is absolutely right, but it is not the point I want to nail down at the moment — there was another strong dependency that we exploited. "A static world"? Yes, but we are talking about static maps here as well. The key underlying assumption of that mapping algorithm was: once I know the poses of the robot, mapping is easy. If we know where the robot was during data acquisition, estimating the map is easy: if I know where the robot is at every point in time, there is no pose uncertainty in the system, and it can simply keep measuring the landmarks and updating its belief. The mutual dependency between the pose of the robot and the landmarks disappears — I can easily break it up.

So the key idea is to use a particle filter to represent only the poses of the robot. Every sample is then an individual trajectory estimate: sample one says the robot went along this path; sample two says the robot went here, a little bit further, then turned around; particle three says: I went straight for a
longer period of time and then turned left. Every one of those samples assumes that it did the right job — that was the key idea of the particle filter: every sample says "I did the right thing." As a result, one option is that particle one did the right job; if so, the world will look like the map obtained by applying mapping with known poses to the trajectory of particle one. Or particle two was right in estimating the trajectory; then I take particle two and do mapping with known poses using its trajectory, and so on. As a result I have a distribution over possible maps, each generated conditioned on the poses of one sample. So the key idea is to use a particle filter to represent only the trajectories the robot may have taken, and then, for every sample, to build an individual map using mapping with known poses: if the trajectory of particle one was the right one, the world looks like this; if particle two did the right job, the map looks like the map generated from the trajectory of particle two; and so on. Compared to localization, where I maintained a single (x, y, θ) location per sample, I now have a trajectory per sample, and attached to every trajectory is a map that is the result of that trajectory. That is exactly what the so-called Rao-Blackwellized particle filter does, and the technique is called Rao-Blackwellization.

Mathematically it is a very simple factorization — just the definition of conditional probability. Say I want to represent a distribution over two random variables a and b, and p(b | a) can be computed very efficiently — given a, I can compute b easily. Then it can make sense to represent only a with samples and, for every sample, to compute p(b | a) using exactly the state of a that the sample provides. What does that mean for us? a is the trajectory — the poses of the robot — and b is the map. Instead of representing p(a, b), the robot's poses and the map, with particles, I represent only p(a), the trajectory, with samples, and then compute p(b | a), the likelihood of the map given the trajectory, for every sample in an efficient manner. As a result, the samples only need to cover the likely regions of a, not of a and b jointly — and since a is low-dimensional and b is very high-dimensional, this is a great advantage. That is exactly what the Rao-Blackwellized particle filter for mapping does. It starts with the probability distribution over the poses and the map — the M landmarks — given the observations and the controls, and factorizes this belief into two terms: p(x_{0:t} | z_{1:t}, u_{1:t}) times p(m | x_{0:t}, z_{1:t}), just applying the definition of conditional probability. In principle u_{1:t} would also appear in the second term, but we can drop it: once I know the poses, I am not interested in the odometry commands anymore. So we have two posteriors: the first is the path posterior, which estimates the trajectory the robot took, and the second is a map posterior that depends on the path the robot has taken.
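Written out, the factorization described above — with the odometry dropped from the map term — reads:

```latex
p(x_{0:t},\, m_{1:M} \mid z_{1:t},\, u_{1:t})
\;=\;
\underbrace{p(x_{0:t} \mid z_{1:t},\, u_{1:t})}_{\text{path posterior: particles}}
\;\cdot\;
\underbrace{p(m_{1:M} \mid x_{0:t},\, z_{1:t})}_{\text{map posterior: computed per particle}}
```

The first factor is covered by the samples; the second is computed analytically for each sample via mapping with known poses.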
that depends on the possible paths that the robot has been taken okay okay good so the question is how can we compute this term efficiently and this is again at mapping with known poses okay so we discussed mapping with known poses in the context of grit map but not that much in the context of um we did in the beginning in the EKF um but um let's have a very very quick look how we could do that in order to to do that let's have a look to the graphical model that underlies this process so this is a graphical model you have seen in um probably the first lecture of this course so we have here the POS the the the post of the robot X which I want to estimate I have the landmarks down here the observations and the controls so what this says is read these errors at um influences so you say XT minus one influences XT as well as UT influences XT so this is exactly the the motion model which you sit which you can which sits in here it's p of XT given XT minus one and UT and these errors show kind of a direct dependence the same holds um the current pose has an impact on what I'm going to observe and also the location of the landmark has an impact on what I'm going to observe this is how you can read that and you can use these um these um graphical models to or in this case a base Network um to also Express conditional Independence or independence of variables okay if we now say um we know the poses of the system right that's what this in the so we want to estimate this guy over here we assume that we know all the poses so all these guys up here are known so all these variables are known and as a result of that you can't find any path from any Landmark to another landmark which doesn't go through this known variables so there's no way kind of to reach from M1 to go to M2 or from M2 to M3 or from M3 to M1 throughout passing through this known poses and if this is the case um these variables are independent of each other so what this results in this these landmarks are all 
independent of each other, given that we know the poses of the system. This is not the official formal definition of independence in those graphical models, but for now I would like to stick with this very simplified form: if all paths from one variable to another variable are, so to say, blocked by a known quantity, then these variables are independent given these known quantities. This is exactly the case here: there's no path from one landmark to another landmark given I know the poses. So for this posterior over here I can say all the landmarks are independent of each other, because I know where the system was at every point in time. The landmark variables are kind of disconnected, so they're independent given the robot's path. Therefore I can split that up into a product of probability distributions over the individual landmark locations: this turns into a product over the M landmarks of p(m_i | x, z), the individual landmark given the poses and given the observations. In which way is this a dramatic simplification? Why is it substantially easier to do it this way? Yes: we can compute each landmark separately. Exactly. If you think about the EKF, for example, this means we can maintain M tiny 2-by-2 Kalman filters, one for every landmark individually, instead of one giant 2M-dimensional covariance matrix in a 2M-dimensional EKF. So instead of one high-dimensional EKF we use a large number of low-dimensional EKFs, and this is substantially more efficient. All these guys here are now 2-by-2 Kalman filters or EKFs; every landmark location can easily be estimated on its own, independently of the rest, given I know the poses. So every sample, because every sample says I know where
the robot was: that's the assumption of each sample. Because I know where I was, I can compute every landmark individually. So every sample can maintain M small 2-by-2 Kalman filters and estimate the location of each landmark in a very effective manner. Okay, to sum that up: we have this posterior here, which we want to estimate with a particle filter, and this is somehow similar to MCL, Monte Carlo localization, because we want to estimate the trajectory of the robot given our observations and given our controls. The only difference is we don't have a given map from the beginning on. The key trick is we can build up this map incrementally, because our mapping process is independent for each sample, and so every sample that is used to represent this posterior gets a map associated with it, built landmark by landmark. That's the key insight here. So, is the basic principle of how we are trying to tackle the SLAM problem clear to everyone? If not, please ask me now, because it will get a bit more involved within the next hour, and I would like to make sure that you understood the concept up to here. One question, yes, sure. [Question: do all particles start from the same position in the beginning?] Yes, typically. If you think about a practical application, you fire up your robot and you start with a local reference frame: every particle says, okay, I'm in the robot's world reference frame at (0, 0, 0), and we start from this point on. That's typically the way you do it. If in a different application you directly want to start in a global reference frame and you have an initial belief, you can distribute the samples according to this initial belief; that's basically up to you. In the standard particle filter you say: I don't have any external frame, I just want to estimate what the map looks like, and then all particles start at the same pose, (0, 0, 0). But if you want to
start in an external, say GPS, reference frame, because you want to relate the robot's trajectory directly to a map and maybe exploit this map partially during the mapping process, and you have, let's say, an initial distribution about where the robot is, you can use the samples to represent this starting belief. You may need more samples in the beginning, depending on how large the initial uncertainty is, but most implementations start at (0, 0, 0); an initial uncertainty is an easy extension. Any further questions? Okay, perfect. So let's dive a little bit into the details and start with the first term: how are we going to do that? We said we want to use a sample-based representation for this path posterior, so this guy should be represented with our samples. How do the individual dimensions look? We have a starting location x_0, and, exactly referring to your question, we typically start with (0, 0, 0) unless we have some reference to an external frame. Then x_1 is the pose of the robot at time t = 1, x_2 is the pose at time t = 2, always the (x, y, theta) location. However, in this particle filter we never revise the past trajectory. We never say, okay, at that point in time in the past I made a mistake, and then correct all the samples; that's nothing a particle filter does. It always takes a step forward and estimates the next position of the robot. As a result, in practice I don't actually need the past trajectory of the robot, because I build the map on the fly; I'm only interested in the next pose. So I do not need to maintain the previous positions of the robot in my sample set; I only represent the current pose, because the past is not revised. I could, in theory, dump the past poses to disk in case I need them. Typically
implementations do that, because they want to draw the trajectory in the end, but for the mapping process itself, for the estimation problem I'm facing here, I don't need it. So in practice every particle only needs to maintain three dimensions for the pose, and then 2 times M dimensions for the landmarks, where every landmark is a 2-by-2 Kalman filter. That's exactly how the first efficient implementation of a so-called Rao-Blackwellized particle filter for SLAM looked: every particle has an (x, y, theta) part, the current pose and orientation, then landmark 1, landmark 2, landmark 3, and these guys are my 2-by-2 Kalman filters. Mike Montemerlo in 2002 was the first one who actually came up with exactly this representation and with an efficient way of implementing it; it was the first implementation of a Rao-Blackwellized particle filter that could handle large maps and real-world problems, the first successful one, and parts of these slides, especially some of these animations, go back to Mike. So here is an example. This is our simplified world with only three samples; in reality, of course, we have more. Particle one says the robot is here, particle two says the robot is here, and particle three says the robot is here, and these are the corresponding landmark estimates, the maps of those individual samples, with all their 2-by-2 Kalman filters. What happens when the robot moves? This one says the robot moved over here, this one over here, this one over here. This new position is generated with the proposal distribution, which is equivalent to the localization problem: I just use my odometry motion model to draw the next pose. And the next step which needs to be done is the sensor update: I need to take my sensor information into account to update my belief and compute the importance weights.
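The particle representation and prediction step described above can be sketched in a few lines of Python. This is a minimal illustration, not the implementation from the lecture: the class and function names, the noise parameters, and the odometry parameterization (rot1, trans, rot2) are our assumptions.

```python
import math
import random

# Hypothetical sketch of a FastSLAM 1.0 particle: one pose hypothesis plus
# one independent 2x2 EKF (mean, covariance) per landmark.
class Particle:
    def __init__(self, pose=(0.0, 0.0, 0.0)):
        self.pose = pose        # current (x, y, theta); past poses are not kept
        self.weight = 1.0
        self.landmarks = {}     # landmark id -> (mean 2-vector, 2x2 covariance)

def sample_motion_model_odometry(pose, u, noise=(0.05, 0.05, 0.05, 0.01)):
    """Draw the next pose given odometry u = (rot1, trans, rot2), as in
    Monte Carlo localization; this is the FastSLAM 1.0 proposal."""
    x, y, theta = pose
    rot1, trans, rot2 = u
    a1, a2, a3, a4 = noise
    # perturb the odometry components with zero-mean Gaussian noise
    r1 = rot1 + random.gauss(0.0, a1 * abs(rot1) + a2 * trans)
    t = trans + random.gauss(0.0, a3 * trans + a4 * (abs(rot1) + abs(rot2)))
    r2 = rot2 + random.gauss(0.0, a1 * abs(rot2) + a2 * trans)
    return (x + t * math.cos(theta + r1),
            y + t * math.sin(theta + r1),
            theta + r1 + r2)

# prediction step: every particle draws its own next pose from the motion model
particles = [Particle() for _ in range(100)]
for p in particles:
    p.pose = sample_motion_model_odometry(p.pose, (0.1, 1.0, 0.0))
```

After this step the particles have spread out according to the motion noise; the sensor update then has to weight them, which is what the lecture turns to next.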
How can I do that? Let's say this is your observation, two landmarks. This robot would observe one landmark here and one landmark here; this robot says one landmark is here and one is here; and this one says one landmark is here and one is here. What I need to do now is take these observations into account and turn them into importance weights. We will derive later on why the weight is exactly what I will show you in a second, but, speaking quite informally, you can use it as an evaluation of how well the particle did in estimating the map, given the current sensor observations: how well are the current sensor observations in line with the map that this individual particle built? If you look over here, that actually looks pretty close; under the assumption that this observation corresponds to this landmark and that observation corresponds to that landmark, it looks reasonable, so this particle will get a high likelihood. Here we are much further away from our estimates, so this particle will probably get quite a low likelihood, and here the observations are also pretty far away from the landmarks, so this one gets a quite low likelihood as well. That's exactly what happens: we can compute importance weights based on the maps of the samples, the poses of the samples, and the current observation; we will go into the details of how this weight update looks. As a result, every sample gets a weight, and then the observation is used to actually update the map of each sample, because it is a new observation: given the pose of the robot was here, I can update my map. That's exactly what happens; I do a small number of tiny EKF updates, one for every observed landmark individually, and then obtain a new belief of the full system. Yes, please. [Question:] Do I do that for
every particle, this update of the map? Yes, exactly. [Question:] Isn't that pretty expensive? Could we say, like in this example, particle three is pretty far away from what we observed, and so we only compute it for, say, the five particles with the biggest weights? First of all, you're absolutely right: you need to update the map of every individual sample, so if you have N maps to update, that's obviously up to N times more expensive than a single map. That's true. However, compared to the EKF, the great advantage we have here is that we have individual 2-by-2 Kalman filters, so this is a constant-time operation for every landmark, and we typically observe just a small number of landmarks, so it's done very efficiently; in the EKF we have to invert our huge 2M-by-2M matrix, which is a more costly operation. Of course it depends on the number of samples: the more samples you have, the more costly this operation gets. But for reasonable sizes this is not a limiting factor, at least for large maps. If you have a small map, the EKF may be more efficient, but for large maps this makes a big difference. And again, just to stress this: a substantial number of research activities in this area ask how we can reduce the number of samples in order to be more efficient, so it's an absolutely valid question. But from the computational complexity point of view we actually gain something doing it this way, because we only have low-dimensional EKFs and not one large high-dimensional one, and the number of samples is typically constant while the number of landmarks tends to be larger than the number of samples. Yes, please. [Question:] Do we have a threshold where we say, okay, this particle is so unlikely now that it gets eliminated? No, that doesn't happen at all. There's no threshold which
eliminates the sample. The resampling process kind of takes care of that for you, because it draws samples proportional to their weights. If you have a sample with a very low weight, there's a very small probability that it is drawn and taken into the next generation of samples. This comes automatically, just by drawing a sample with a probability proportional to its weight; we don't need to maintain an explicit threshold to eliminate samples. Okay, any further questions? Perfect. So what we have done in the first half hour is give a rough idea of how FastSLAM works, and every one of you is probably saying, okay, I think I understood that, and that's perfect, I hope this is the case. We will now dive a bit deeper into the math, but it should not violate the mental picture of FastSLAM that you have generated right now. The only thing is, as we dive into the details, I want to give you a more detailed explanation of why things are exactly as they are: for example, how is the weight exactly computed, and why is the weight like this. So after this more informal introduction to FastSLAM, I will say: this is a design choice, and this design choice leads to these computations, which then result, for example, in this way of computing the importance weight. My goal is that after this lecture you are actually able to implement such a system yourself. Okay, let's look at the key steps of FastSLAM; we look into what is called FastSLAM 1.0, the first generation of FastSLAM. There is a second generation, but what I explain here is known as FastSLAM 1.0. What it does is it
draws samples according to the standard odometry model that we used in localization. It extends the path posterior to the next point in time by taking every sample and drawing the new generation according to the motion model, exactly as in Monte Carlo localization. So this first step should be clear; does anyone need a short recap of how this works? Perfect, good. So we take our old generation of samples, and for every sample we draw a new pose taking into account the odometry information; this gives us a new sample. The second step is to compute an importance weight. The importance weight is mathematically defined as the target divided by the proposal, and it tells us how we need to correct our belief, as the odometry alone is not the full distribution that I want to approximate; my target also takes into account the observation, and this is what comes in here. So what are the individual elements here? This is a Gaussian distribution, and it takes the current observation minus the expected, or predicted, observation. This predicted observation is computed for every particle: as every particle has its own map, every particle will come up with a different expected observation. In the EKF we computed one expected observation; here I compute an expected observation for every sample, because every sample keeps its own map. So for every sample I ask: how well does the obtained observation match the expected observation? And I have a measurement covariance, which takes into account the uncertainty of the landmark itself and the uncertainty of the observation; it is the part of the EKF that gets inverted in the correction step. That's
exactly what's called the measurement covariance. The third step is just the belief update using the EKF update rule, no black magic in here, followed by the standard resampling operation. Okay, so let's now dive into the details: if you want to implement that algorithm, how does it look? We start with FastSLAM 1.0 with known correspondences; again, that was something we assumed. We have N samples here, ranging from k = 1 to k = N, so we iterate over all samples. What do we do? Every sample has a pose, a landmark estimate, meaning a list of landmarks, and an importance weight. So for every sample we have the current pose estimate, the weight, and a list of 2-by-2 Kalman filters, each given by a mean and a covariance, which represent the posteriors over the individual landmarks. That is what this expression here says. The first thing we do: given the current pose at time t minus one of that particle and the odometry command u_t, I sample a new pose for that sample, standard odometry motion model sampling as in Monte Carlo localization. The first part was easy; this is the prediction step. Now let's go into the correction step. We assume j is the feature that we observe in our current observation; this is the data association variable, which tells us that the current measurement actually corresponds to landmark j of that particle, same as in the EKF. Again as in the EKF, I have to distinguish two situations: either the landmark has never been seen before, so I need to initialize it, or it has been seen already in the past. Let's start with the case that it has never been seen. What I do is compute its mean with the inverse of my observation function, exactly as was done in the EKF: if I've never seen a landmark, I use the first observation to initialize it. I compute the Jacobian, I compute its initial uncertainty, taking into account how
certain I am about my observation and the Jacobian, and I assign a default importance weight, a standard value, because otherwise the weight would be undefined; for this sample it is regarded as a new landmark, so we just initialize it. That was the initialization case. The more interesting case is when I know this landmark and want to update it; this part comes over here. I have written it in a short form which just says: it's landmark j that we observed, so execute the EKF update for that landmark. I don't need a prediction step, only the update step: the prediction step of the sample is done with the particles, and only the update step for the map is done with the EKF. Then I compute my importance weight, as I said before, with the expected observation and the measurement covariance. The measurement covariance takes into account the uncertainty that the landmark had at the previous step plus the measurement noise: how certain have I been about the estimate in the map, plus the uncertainty of my observation. It's very similar to the term used to compute the Kalman gain, except that there it gets inverted in the end. And for all features that we have not observed, I just don't change them: every landmark which I have not observed keeps exactly the same belief as before. I iterate over all samples, and at the end I do a resampling step. That's it. Yes, please. [Question:] For all unobserved landmarks, does that mean unobserved in this step? Yes, for all unobserved in this step: if I don't observe a landmark, I keep its previous estimate. There's no black magic here; in practice you would implement it so that you update the landmarks in place, and then you don't need any copy operation like this. It's just written this way to make it, hopefully, easier
to understand. So there are three cases which can occur: either you have never observed the landmark and you initialize it, or you observe it and update it, or the landmark is not observed and stays unchanged; these are the cases handled in this algorithm. On the next slide... oh, yes, please, another question. [Question:] You compute the weight based on the new locations of the landmarks, because you update them? So is the weight measured after updating the landmarks? Very good observation. It's written on the slide how this is done: if you look at how Q is computed, Q takes the uncertainty at the previous point in time, so the weight is computed before the update. The old estimate leads to the new estimate at time step t, but Q uses t minus one. So, very good observation: you need to first compute the weight and then update the map. On the next slide I have the same algorithm; the only thing I did is expand the EKF update and put it here for completeness, because one of the homeworks will dive deeper into this implementation, so you have everything at hand. This red block here is exactly the EKF update equation, just a copy-paste from the EKF. We compute our predicted observation; the only difference is we now do it for every particle, and here you see exactly that it is done at time step t minus one, which is what you were referring to. I compute my Jacobian, I compute the measurement covariance, I compute my Kalman gain, and here you see exactly that the Kalman gain takes this measurement covariance inverted into account; this was the block that I invert in the EKF. Then I update my mean estimate as a weighted sum of the prediction and the correction, and I update the covariance of that landmark. So it's just a standard copy
paste from the EKF algorithm. Okay, so we have now basically explained all the important things. The only open point is: why on earth is this the importance weight? That's something which I haven't derived at all; so far you accepted it, and deriving it is what we will do very soon. But before I do that, I just want to show you an example of how this works. What you see here is a simulated environment, a video by Mike Montemerlo. All the blue dots are the ground-truth locations of the landmarks, the blue trajectory that you will see evolve is the true trajectory that the vehicle has taken, red is the estimate, and I think the dashed line is odometry. This part here is the visibility range of the sensor. So you see how the red and the blue are both more or less on top of each other; sometimes you see a small deviation, which is the difference between ground truth and the estimate. You always see the estimate of the best particle, shown here, and the dashed line is odometry. This is just a small illustration to give you an idea of how this works. And you see these red dots flickering around: these are the observations that the system took, like these over here, in this case for this landmark and for this landmark. And the dashed odometry line: if you would just use the odometry information, that would be the estimate of the system, and you can see it deviates from the true trajectory. Okay, so now let's look into the last thing I promised you: why is the term I showed you actually the importance weight? The importance weight, as I said before, is a direct result of the importance sampling principle.
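Before the derivation, the per-particle correction step described above (predicted observation, Jacobian, measurement covariance, Kalman gain, weight) can be sketched for a range-bearing sensor. This is only an illustrative sketch under our own assumptions: the noise matrix R, the function name, and the 2D range-bearing model are not taken from the lecture slides.

```python
import numpy as np

def ekf_update_and_weight(pose, mu, Sigma, z, R=np.diag([0.1, 0.01])):
    """Per-particle, per-landmark correction for FastSLAM 1.0 with a
    range-bearing sensor. pose: (x, y, theta) of the particle; mu, Sigma:
    the landmark's 2x2 EKF; z: measured (range, bearing).
    Returns (new mean, new covariance, importance weight)."""
    x, y, theta = pose
    dx, dy = mu[0] - x, mu[1] - y
    q = dx * dx + dy * dy
    # predicted observation z_hat, based on this particle's own map
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])
    # Jacobian of the observation with respect to the landmark position
    H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                  [-dy / q, dx / q]])
    # measurement covariance: landmark uncertainty (before the update!)
    # mapped through the Jacobian, plus sensor noise
    Q = H @ Sigma @ H.T + R
    # Kalman gain and the 2x2 EKF update of the landmark
    K = Sigma @ H.T @ np.linalg.inv(Q)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # normalize bearing
    mu_new = mu + K @ innov
    Sigma_new = (np.eye(2) - K @ H) @ Sigma
    # importance weight: Gaussian likelihood of the innovation under Q
    w = np.exp(-0.5 * innov @ np.linalg.inv(Q) @ innov) / \
        (2 * np.pi * np.sqrt(np.linalg.det(Q)))
    return mu_new, Sigma_new, w
```

Note that Q is computed from the old Sigma, matching the point raised in the question above: the weight uses the uncertainty at t minus one, before the map is updated.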
The importance sampling principle says: I can draw samples from an arbitrary distribution (basically arbitrary; there are some small constraints, but for us let's say arbitrary) even though I want to represent a different distribution, and I then need to correct each sample by assigning it a weight. The weight is given by the ratio between the target distribution, the distribution I want to approximate, and the proposal distribution, the distribution from which I have drawn my samples. This is exactly what is shown over here. What I now want to do is start from this equation, tell you that I have a certain target distribution which I want to approximate and that I used a certain proposal distribution, do some derivations, and in the end the weight equation I have shown should fall out. Okay. My target distribution is the path of the robot given the observations and given the control commands; that's what I want to approximate. The proposal distribution takes into account all the information except the last observation, because I don't use my sensor observation when I draw the samples; I only use the odometry information. And the proposal is used in a step-by-step process: I have a recursive belief, and I evolve the samples according to this equation. I take the belief at the previous point in time, that's the recursive part here, take the odometry, and propagate that forward, so I can write my proposal this way. Is that clear to everyone: this is the target distribution, what we want to represent with our samples, and the proposal is this term over here, split into two parts, the one-step odometry motion model and the part that comes from my previous belief? So this part is equivalent to the term up here, except that all the
time indices are t minus one; it's what my previous belief was, and I want to take my previous belief one step further. Okay, so let's simply proceed: my weight is my target divided by my proposal distribution, so just plug in these terms; that was my target, that was my proposal. I have this index k over here because I evaluate my target and my proposal exactly at the pose for which the sample was drawn. So I have this equation. What do I do with it? I take the part on top and apply Bayes' rule and a factorization. It gets a little messy, but everything is pretty straightforward if you know Bayes' rule. What happened? We moved z_t to the other side, so we have p of z_t given all the poses, times the probability of all the poses, and all the poses can be factorized into the current pose given the previous pose, times the belief about all previous poses. That is, p(x_{1:t}) = p(x_t | x_{t-1}) p(x_{1:t-1}); this is just the standard factorization, here using the Markov assumption, otherwise I would have x_{1:t-1} in the conditioning. This is exactly what is done over here: the second part of Bayes' rule, the prior over all poses x_{1:t}, is split into x_t and all previous poses, and of course I have my normalizing term from Bayes' rule. That's all that happened so far. And now you can see we can easily simplify: these two terms are exactly the same, and these two terms are exactly the same, so perfect, we can cancel all of them. The only thing which remains is my normalization constant, which needs to make sure that the
weights of all samples sum up to one, and this term over here: the likelihood of the current observation given all the poses and the past observations. Okay, let's take this term a step further. What I can do here is integrate over the position of the observed landmark, the marginalization rule: I introduce the position of landmark j, the landmark that I observed, and get this term over here, z_t given m_j and the rest, times m_j given all the rest. Now I can look at these terms and say: given I know the current pose of the system, and given I know where the landmark is, I can drop all previous poses and all previous observations. It's exactly the same assumption we made in Monte Carlo localization and in all our models: given we know the state, so where the landmark is and where the robot is, we can compute the likelihood of an observation. So the first term simplifies to exactly this equation over here. The second term remains as it is, except that I drop x_t, because if I only have observations up to t minus one, knowing where the robot will be in the future doesn't help me estimate the location of the landmark; also a step we have done quite often in the past. Okay, so what do we have in here? Which problem is this one? Estimating the location of a single landmark given our observations and given the poses: mapping with known poses. So this is our 2-by-2 EKF, which we can use to compute that; it's a Gaussian which we know how to estimate. And what we have here is the likelihood of an observation given we know the pose and the landmark location; again we go back to the EKF, where we have done that, so that's all quite easy. And under the assumption that
we use a Gaussian model, this is the likelihood of the current observation given the predicted observation and the uncertainty of that observation, exactly the same as in the EKF, where we used this to compute the Kalman gain. And the second term here is just the estimate of the location of the landmark, the quantities we computed with our EKF estimate. As a result, I can combine that: the measurement covariance matrix Q is given by the previous landmark uncertainty, mapped through the Jacobian of the observation function, plus the measurement noise; it is the uncertainty of this belief up here. This results in a Gaussian distribution where this Q is exactly the measurement covariance and this is the predicted observation. The only caveat is that this is an approximation, not exact, and the reason is that there's a linearization involved: I again have to linearize my observation function, similar to the EKF. So, as a result, the importance weight can be computed by evaluating the Gaussian distribution, comparing the difference between the current observation and the predicted observation under the expected uncertainty of that belief, given by how certain I am about the location of the landmark plus the uncertainty of my sensor. This perfectly fits the intuition I gave in the beginning: the weight tells me how consistent the map that the particle built is with what the robot currently sees. That is exactly what's written in here: this is what the robot currently sees, this is what the particle estimated, and the sensor noise and the uncertainty of the particle's estimate are encoded here as well. So the weight tells me how consistent the world representation that the individual sample generated is with what the robot
physically perceives, and the closer the expected observation is to the observation the system actually obtained, the higher the weight — so the better the map that this particle has built, the better the pose estimate of that particle in turn. This is the reason why the algorithm has this line over here for computing the weight. That's it; that was all the black magic involved in FastSLAM. To summarize, the key design decisions have been: we take the odometry motion model as our proposal distribution; as a consequence of that choice, we need to compute the weights according to this distribution; and since the landmarks are independent of each other given the poses of the robot during the mapping process, we can use small 2x2 Kalman filters — extended Kalman filters — to represent the locations of the individual features for every single particle. That is the EKF update step which I then do, and that is the whole magic of basic FastSLAM 1.0. Is there any question I can answer right now? Okay — a question about the integral: what you see in here is, in the end, a convolution of two Gaussians, one describing the estimate with its uncertainty and one describing the uncertainty of the observation. The convolution of a Gaussian with a Gaussian is a Gaussian again, so under the assumption that these two terms are Gaussian, the result is a Gaussian again — but a very valid question. Okay, now let's look into one thing we have ignored so far: for the EKF at least, we assumed the data association was given — the correspondence variable c was known, so we knew which observation corresponds to which landmark. How do we actually do that in practice? If you look at this example over here: let's say this is where the robot thinks it is, these are three landmarks in the environment, and these are the observations. The robot could say: either this observation corresponds to this landmark or to that one, and the same here; and if this observation is assigned to this landmark, it is quite likely that that one corresponds to that landmark — but in theory you have all possible assignments of observations to landmarks. The problem is that there is a large number of possible data associations, and which associations are likely depends on the pose of the robot. If I knew the robot was sitting over here, rotated a bit to the right, the red association would be the most likely one; if the robot was sitting a bit more to the left, rotated to the left, the blue one is likely to be the correct one. So the pose of the robot influences the data association, and this is now a great advantage for the FastSLAM algorithm: I can do my data association on a per-particle basis. Every particle can make a different data association, and a particle that makes good data associations is more likely to survive the resampling steps than a particle that makes bad ones. So the big advantage is: if the robot is sitting over here and observes three landmarks, it is quite likely that this is the data association it makes; if the particle sits down here, it is quite likely that this is the data association; and particles here in the middle get a lower weight because they don't fit any of these constellations very well. This is one of those situations where you quickly get a multimodal belief — the robot went either here or there, because both observations fit very well with the pattern of landmarks that I see.
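As a small illustration of the per-particle update summarized above — odometry proposal, one 2x2 EKF per landmark, weight from the measurement likelihood — here is a minimal range-bearing sketch in Python. This is my own illustrative code, not the lecture's implementation; the observation model, noise values and function names are assumptions.

```python
import numpy as np

def expected_obs(pose, lm):
    """Range-bearing observation of landmark lm from robot pose (x, y, theta)."""
    dx, dy = lm[0] - pose[0], lm[1] - pose[1]
    r = np.hypot(dx, dy)
    b = np.arctan2(dy, dx) - pose[2]
    return np.array([r, (b + np.pi) % (2 * np.pi) - np.pi])

def jacobian_lm(pose, lm):
    """2x2 Jacobian of the observation w.r.t. the landmark position."""
    dx, dy = lm[0] - pose[0], lm[1] - pose[1]
    q = dx ** 2 + dy ** 2
    return np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                     [-dy / q,          dx / q]])

def update_landmark(pose, mu, sigma, z, Q):
    """2x2 EKF update of one landmark (mu, sigma) for one particle.
    Returns the new estimate and the measurement likelihood,
    which is exactly this particle's weight factor for observation z."""
    z_hat = expected_obs(pose, mu)
    H = jacobian_lm(pose, mu)
    S = H @ sigma @ H.T + Q                      # innovation covariance
    K = sigma @ H.T @ np.linalg.inv(S)           # Kalman gain
    nu = z - z_hat
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi   # normalize bearing
    mu_new = mu + K @ nu
    sigma_new = (np.eye(2) - K @ H) @ sigma
    w = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / \
        (2 * np.pi * np.sqrt(np.linalg.det(S)))
    return mu_new, sigma_new, w
```

The closer the actual observation `z` is to the expected one `z_hat`, the larger the returned weight — which is precisely the weighting rule described above.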
So I will end up with multimodal beliefs — not the nice banana-shaped, Gaussian-like distributions seen over here — and this is one of the advantages of the particle filter: it can do the data association on a per-particle basis. Another example is this one: the robot sits over here, these are two landmarks in my map, and this is the observation the robot gets. What I can do now is compute the Mahalanobis distance of the expected observation with respect to these two landmarks, and this may result in the case that with probability 0.3 it is the red one and with probability 0.7 it is the brown one. What should I do? The standard approach says: just take the most likely one. But what I can do here instead is sample: I say I don't know which one it is, and I draw a landmark with a probability given by those individual likelihoods. If I sampled ten times, roughly seven times I would draw the brown data association and three times the red one. With a large number of samples I can simply sample the data associations — some samples will draw the right association, some the wrong one — and as a result they will come up with different maps. The hope is that in the long run those which made the right data association survive, and those which made the wrong one die out. Of course this process may take some time, and I need a sufficient number of samples to cover the different possibilities, but the particle filter offers this option. So, as I said, there are multiple options: one is to pick the most probable match, which is what the standard EKF would do; another is to pick a random data association, weighted with the probability that it is the right one — exactly what I explained before.
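The sampled data association just described fits in a few lines. The likelihood values and the new-landmark threshold below are illustrative assumptions; in practice the likelihoods would come from the Mahalanobis distances mentioned above.

```python
import random

def sample_association(likelihoods, new_landmark_thresh=1e-3):
    """Draw a landmark index with probability proportional to its likelihood.
    If no landmark explains the observation well, report a new landmark (-1)."""
    if max(likelihoods) < new_landmark_thresh:
        return -1                       # treat observation as a new landmark
    total = sum(likelihoods)
    u = random.random() * total
    acc = 0.0
    for i, l in enumerate(likelihoods):
        acc += l
        if u <= acc:
            return i
    return len(likelihoods) - 1         # guard against rounding
```

With likelihoods `[0.3, 0.7]`, roughly 70% of the particles would commit to the brown landmark and 30% to the red one, exactly as in the ten-samples example above.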
What I typically have in addition is a threshold: if there is no landmark with a high likelihood of being the association, I assume the observation to be a new landmark — but again, that is an assumption. The nice thing here is that this is a very easy strategy, very simple to implement, but with very nice properties, and this is one of the advantages of FastSLAM compared to the EKF: I can do very simple but effective data association. The reason is that I don't have a parametric form describing the uncertainty of the robot; I have this sampled belief, and in the sampled belief you can say: for this sample this is the right data association, for that sample that is the right data association. This sample-based representation makes it very easy to do this advanced data association, with multiple data association possibilities coexisting in my belief. This evolves naturally from FastSLAM, and the advantage is exactly the sampled belief, compared to a Gaussian or another parametric form — this is one situation where the sampled belief is really advantageous. Okay, let's see how FastSLAM actually compares on the Victoria Park data set you have seen before — just one example where you can use the GPS information to analyze the error. Again, you see it's not a perfect estimate, but it is typically better than what the EKF provides. There is also a small video, again provided by Mike Montemerlo: you can see the robot driving around here; the red dots are the observations and the uncertainty of the observations in the model; blue is in this case the odometry — not the ground truth, so the coloring is a little inconsistent with the earlier video, sorry for that. The red trajectory here is the trajectory of the FastSLAM algorithm, and at least you can see that using odometry
would give you a pretty inconsistent estimate — something you should see from this plot over here. Okay, that video goes on for quite a while. One thing we should analyze is the impact the sample size has on this algorithm, because obviously if you take, say, a single sample, you are quite likely to perform poorly; how does it change with 10 samples, with 100 samples? Typically, the higher the number of samples, the better your estimate. And what you can see over here: if everything is perfect — this was a simulated data set, everything perfectly Gaussian, and with low motion noise, so the motion is roughly linear — then the performance of the EKF in terms of quality of the results is actually pretty good. If you now increase the motion noise, the results look substantially different: the performance of the EKF decreases dramatically, because the nonlinearity of the motion, combined with the large uncertainty, gives you bad pose estimates, and the error dramatically increases — while it barely affects the FastSLAM algorithm. So it is an interesting algorithm with some very nice properties. It gets rid of the EKF for modeling the joint posterior by splitting the problem up: for the part that is not well represented by a linear function — the motion — I use a particle-based representation, so I can have a multimodal belief about where the robot is; and given this multimodal sampled belief, I compute a distribution over maps, where every particle has its own map. This way I can also take into account different data associations on a per-particle basis, which is a real advantage if you don't have a technique — say, a good feature descriptor — that always gives you very good
data associations. As I said, it models the robot's path by samples, computes the map given the path, and can do per-particle data association. As a result — and this is why the data association is simpler here — for every data association decision you don't have a distribution over the robot's pose, but a concrete instance: the robot was here; that is the sampled belief, and that makes it easier. Of course you have to do N data associations, one per particle, but it is much easier to do N simple data associations than one complex one, where you have to reason about, or integrate over, the possible data associations given the uncertainty of the vehicle itself. Okay, next I would like to look into the computational complexity of this algorithm. What do you think the computational complexity of standard FastSLAM is? What about integrating the odometry information — how complex is that in terms of the number of landmarks and the number of particles? Exactly: it is linear in the number of particles, because we sample for each particle, and it does not depend on the number of landmarks at all — the motion step is just linear in the number of samples. What about integrating an observation into the belief? Right: the number of particles times the number of observed landmarks, if we treat every landmark separately. Every particle needs to update, and given the data association I basically just look up, say, landmark number 10 in an array, retrieve it and update it — which does not depend on how many landmarks I have stored, if I implement it that way. Resampling — how expensive is resampling? How expensive was the low-variance resampling I explained last time? It was linear in the number of samples. But — and it is a big but — while the resampling procedure itself is linear in the number of samples, in the worst case I need to duplicate samples, and duplicating a sample means copying its map. So the resampling step turns out to be the computationally most demanding operation in the standard, simple particle-filter-based implementation: it is the number of particles times the number of landmarks, because if all particles but one have weights close to zero and one particle has a weight close to one, that particle is quite likely to be copied N times, giving an N times M complexity. This is the per-iteration complexity of a simple implementation of FastSLAM — still just linear in the number of landmarks, which is actually quite remarkable compared to the EKF. But we can do better — not by changing the algorithm, but by using a smarter data structure. Instead of every particle maintaining an array or list storing all the landmarks, I take a smart tree structure, a binary search tree, where particles can share landmarks. Since the landmarks are all independent of each other given the poses of the robot, updating some landmarks in one part of the state space typically does not change the landmarks elsewhere — unlike the EKF, where an update here can change the uncertainty of all landmarks everywhere; we don't have that here, because of the conditional independence given the samples. So what I can do is take a search tree — it is really nothing more than that — and say: okay, I'm
looking for, say, landmark number three. I walk through the tree: is it smaller than four? Yes, go here. Smaller than two? No, go here. Smaller than or equal to three? Yes — I found landmark number three. It is just a binary search tree, no black magic; every one of you should have heard that in an introductory computer science or algorithms and data structures course. This has a complexity which is logarithmic in the number of landmarks, so I can access a landmark in logarithmic time. What I now do is exactly the same thing, but let particles share these estimates down here, and then the tree gets a little more complex. Say I have my old particle from the previous point in time, and a new particle arrives — say a particle duplicated in the resampling process. For the old particle I keep my standard tree as before, and the new particle simply references into the old tree; whenever something is updated, only the updated path is replicated and stored locally. So when I update a landmark, only the nodes along that path are copied: I now need to store two different estimates of landmark number three, but all the rest is shared. This requires some bookkeeping about what has been updated and which estimate belongs to which particle, but this way the copying can be done very efficiently. The only price is that retrieving a landmark now has a complexity which is logarithmic in the number of landmarks, not constant anymore; as a result, integrating an observation becomes more expensive by a factor of log M, but the resampling gets much cheaper, and the overall complexity is lower: it is the number of particles times the logarithm of the number of landmarks.
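This shared-tree idea is classic "path copying" over a binary search tree keyed by landmark id. A minimal sketch follows; the class and function names are mine, and balancing is ignored for brevity.

```python
class Node:
    """Node of a binary search tree over landmark ids.
    Subtrees are shared between particles; an update copies only the
    path from the root to the changed entry (path copying)."""
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def lookup(node, key):
    """O(log M) retrieval of a landmark estimate."""
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None

def update(node, key, value):
    """Return a NEW root; only the O(log M) nodes on the search path are
    copied, everything else is shared with the old tree (and hence with
    the other particles)."""
    if node is None:
        return Node(key, value)
    if key == node.key:
        return Node(key, value, node.left, node.right)
    if key < node.key:
        return Node(node.key, node.value, update(node.left, key, value), node.right)
    return Node(node.key, node.value, node.left, update(node.right, key, value))
```

A duplicated particle that refines landmark three calls `update(old_root, 3, new_estimate)` and gets its own root, while the untouched subtrees remain physically shared with the old particle — exactly the bookkeeping described above.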
This is of course only the computational complexity; the memory consumption is larger, because I still need to maintain all landmarks. You had a question — the same line? Yes, that is definitely a copy-paste error made in this figure: this should be five and this should be seven, absolutely right. Okay, let's look at the memory usage of this algorithm. The memory usage is dramatically reduced by using this smart tree structure, because you don't need to replicate the landmark estimates for all particles: you can nicely see the log-like growth along these lines, while linear FastSLAM, which copies all landmarks for every particle, has a memory consumption that grows linearly. How steep the increase is depends only on the number of particles you have — the number of particles is constant, so with more particles you get a steeper increase, but it stays linear. Okay, that was FastSLAM 1.0 — now you know everything about FastSLAM 1.0 — and I want to use the next two to three minutes to explain what FastSLAM 2.0 could be, or at least what the improvements are. In the next lecture we will look into building grid maps with these particle filters, and there we will effectively look at FastSLAM 2.0 for grid maps, because for grid maps FastSLAM 1.0 doesn't really work, for reasons we will explore next time — for landmark maps it is operational. So I won't go into FastSLAM 2.0 for landmarks: we do FastSLAM 1.0 for landmarks today and FastSLAM 2.0 for grid maps next week — but at least I would like to give you an idea of how we can do better.
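Before moving on, it is worth restating compactly the Rao-Blackwellized factorization that all of FastSLAM rests on (standard notation, my addition, not copied from the slides): the particle filter estimates the trajectory, and, conditioned on each sampled trajectory, the M landmarks are independent and are tracked by the small per-particle EKFs discussed above.

```latex
p(x_{0:t}, m_{1:M} \mid z_{1:t}, u_{1:t})
  = \underbrace{p(x_{0:t} \mid z_{1:t}, u_{1:t})}_{\text{particle filter}}
    \;\prod_{i=1}^{M} \underbrace{p(m_i \mid x_{0:t}, z_{1:t})}_{\text{2D EKF per particle}}
```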
So what does FastSLAM 1.0 do? It takes the odometry motion model as the proposal distribution, and the question is: is there a better way to do that? Obviously, if I ask such a question, there is. How could you imagine sampling better — better meaning that you draw samples only in the regions of high likelihood of your state space? A technique we discussed last week, at the end of the grid mapping chapter, may inspire you: scan matching. The idea was that, given reasonably accurate sensor information, just by aligning sensor scans — like laser range scans — we get a much better pose estimate than by using the odometry information alone. So what FastSLAM 2.0 does is also consider the measurements in the proposal distribution. This breaks a little with the clean correspondence between the particle filter implementation and the standard Bayes filter, where the controls are used only in the prediction step and the observations only in the correction step; here those two steps are mixed together a bit, because we also take our observations into account when sampling the particles. We compute a better proposal — better meaning a distribution which is closer to my target distribution, so that more samples end up in the likely areas of the state space and far fewer samples end up in low-likelihood areas. This is the key step that brings the algorithm to the next level of performance. So FastSLAM 2.0 draws from a proposal distribution which incorporates the odometry and the sensor information, and this leads to proposal distributions which are much more peaked around the true state of the system — not these flat, broad distributions.
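In symbols — using standard notation, which is my addition and not from the slides — the difference between the two proposals for particle k is:

```latex
\begin{aligned}
\text{FastSLAM 1.0:}\quad & x_t^{[k]} \sim p\big(x_t \mid x_{t-1}^{[k]}, u_t\big)\\[2pt]
\text{FastSLAM 2.0:}\quad & x_t^{[k]} \sim p\big(x_t \mid x_{t-1}^{[k]}, u_t, z_t\big)
  \;\propto\; p\big(z_t \mid x_t, m^{[k]}\big)\,p\big(x_t \mid x_{t-1}^{[k]}, u_t\big)
\end{aligned}
```

Conditioning on the current observation $z_t$ is what makes the FastSLAM 2.0 proposal peaked around the poses the measurement actually supports.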
As a result of that, I need fewer samples, so it can be computationally advantageous. On the other hand, the computations for this proposal are more involved, and you may argue that the gain from reducing the number of samples — at least in computation time — may be gone again; but you get much more peaked distributions, and this leads to a higher likelihood that your filter will not diverge and will produce good estimates, especially in the long run. It does get more complicated — this is what is known as FastSLAM 2.0 — and we don't go into the details here for feature-based FastSLAM; we do that next week in the context of grid-based FastSLAM, where we will look at this problem again. Just as a small comparison: how does the performance of FastSLAM 1.0 compare to FastSLAM 2.0? Here is a small example — the robot was driving around here; this is FastSLAM 1.0, this is FastSLAM 2.0. At first glance they don't look too different, but what you can see is that these particles here have a much longer history — a much larger diversity of trajectory estimates in the past — whereas in FastSLAM 1.0, down here, all particles have basically the same past estimate and only spread out from this point in time onwards. The reason is that the frequent resampling steps eliminate particles, and if I keep eliminating a substantial number of particles in every step, I eliminate different trajectory hypotheses about the past: the particles get replaced, and information about the past dies out. FastSLAM 2.0 retains a higher diversity of estimates, because they are all good estimates, while here we essentially converge to a single mode. So FastSLAM 2.0 is more likely to maintain multiple modes, and this is substantially better when you close large loops, because then multiple hypotheses about how
the trajectory looked in the past make a big difference. Take this just as a teaser — we will dive into it next week, and then you will get a bit more knowledge about why this is an important property of these algorithms. To sum up: what is FastSLAM typically referred to? FastSLAM is particle-filter-based SLAM that uses the trick of separating the estimation of the trajectory from the estimation of the map: we use particles only to estimate the trajectory, and then for every trajectory estimate — so for every particle — we build its own map. This is called Rao-Blackwellization, so it is a Rao-Blackwellized particle filter for SLAM, and using it with landmarks is what Mike Montemerlo coined the term FastSLAM for. There are FastSLAM 1.0 and FastSLAM 2.0, and the main difference is that FastSLAM 2.0 uses a so-called improved proposal distribution — a better proposal which also takes the most recent sensor observation into account, in order to have a distribution that is more peaked around the areas of high likelihood. In terms of computational complexity, FastSLAM is a pretty efficient algorithm, in the sense that it is logarithmic in the number of landmarks and linear in the number of samples — and the number of samples is typically constant, say 1,000 or 2,000; it typically does not grow excessively. FastSLAM has been shown to work quite convincingly: it scales very well to millions of landmarks — there have been data sets with at least a million landmarks where FastSLAM solved the problem — which gets pretty tricky for the EKF, because there you would have a two-million by two-million landmark covariance matrix. It can get ugly — no, let me correct that wrong statement: you do not need to invert matrices of this size, but you do need to multiply those
matrices, so you have at least a quadratic complexity to maintain for the EKF, while here we only have a logarithmic complexity — and logarithmic is substantially better than quadratic. Another advantage of the FastSLAM algorithm is that you have quite some flexibility in doing data association: you can do per-particle data associations, which is a really nice property and makes solving the data association problem easier. It has a further advantage which I haven't discussed in detail here: the linearization of the motion model, which the EKF suffers from, is not a problem for FastSLAM, because you just propagate a point — namely the pose of the particle — through the nonlinear function. Of course you still need to sample around that, but nonlinear motion models are less critical, because you can propagate samples through them in a much better way; you do not necessarily generate a Gaussian estimate. You do need a way of sampling from these distributions, but you are not restricted to linearization and Gaussian approaches. If you want to know more about FastSLAM, I again recommend the Probabilistic Robotics book — in this case check the errata page, because the printing of the book still contains a couple of errors; I hope to have fixed all of them on the slides here. That is probably the reference that is easiest to grab. There are also two papers by Mike Montemerlo, from 2002 and 2003, where he introduced FastSLAM — the first with known data association, the second with unknown data association — which are really nice works. I guess the first one is easier to grasp, so I would recommend looking into that one; on the other hand, you find both for free on the web, so if you don't have the book, don't want to buy the book, and don't want to
read it in the library, you may also have a look at these two works. That's it from my side for today — are there any questions? Okay, good. So we close with features for now; this was, at least for some time, the last landmark-based approach that you will see. We look into grid-based mapping techniques using Rao-Blackwellized particle filters next week, before Christmas, where we also apply these FastSLAM 2.0 ideas to grid maps. Then we have a long Christmas break, and next year we will look into the graph-based approaches to the simultaneous localization and mapping problem. That's it from my side for today; if there are no questions, then thank you very much, and we see each other next week. Bye-bye.
---

SLAM Course 2013 — Lecture 20: SLAM Frontends (2013/14), Cyrill Stachniss

---
Okay, so welcome to the last lecture of this course in this winter term. What we discussed so far were mainly the so-called back ends — the optimization engines, the probabilistic estimation techniques running in the background to solve the SLAM problem — and today I would like to give a very brief overview of front ends, and of one important aspect inside successful front ends: how to determine whether a constraint is likely to be a correct one. We are still interested in avoiding adding wrong constraints. We have learned that there are techniques which can deal with outliers in the data association — so if we have wrong constraints, there are techniques that can handle them — but of course we are still interested in adding as few false positives as possible, and this is the job of the front end. My goal for today: I will introduce three small front-end systems on a very abstract level, just giving you an idea of how they work with different sensors, and in the second part of the talk I would like to stress which conditions such a constraint, or the characteristics of the local environment, should fulfill in order to — not guarantee being outlier-free, but — reduce the probability that a match is actually an outlier. So we are talking about front ends today. What have we learned so far? We have our graph — you have seen this picture a few times during the course: the robot moved through the environment, we have our nodes, and we have constraints between those nodes, including loop closures when the robot observes the same part of the environment again. So far we have always assumed these constraints — these edges here — to be given, and today we look at how to actually generate them. The successive constraints — that should be clear by now — can simply be
odometry information or the result of an incremental scan matcher, so that is something you have already seen in this course and know how it works. The important ones are the loop-closing constraints — or, when the robot revisits the same place, what you could informally call localization constraints: the robot moves through a part of the environment it has seen before and adds new constraints between its current observations and the observations it obtained in the past. You have also seen this figure before: these are the two main components of a SLAM system, the back end and the front end. So far we have mainly looked at the graph optimization: the raw data comes in, the graph is constructed, the constraints are generated, the graph is handed to the back end, the back end optimizes the graph and returns new node positions to the front end, and the front end uses this information together with new sensor information to generate new constraints. We have been looking at that first part, and today we look into this second part over here — that's the goal. So how do we get those constraints? We obtain them by matching observations. We have different observations depending on the platform — it can be a stereo camera, a laser rangefinder, or other sensor modalities — and for every sensor there is a different way of obtaining those constraints. The constraints may take into account what the sensor actually sees and how unique the data is that the sensor generates for a specific area. If all corridors look exactly the same — if you rely only on laser range data and the corridors are indistinguishable from the laser's point of view — then we may treat that differently than a corridor with a lot of unique markers
glued to the walls which a camera can perfectly observe. So depending on what we see and what assumptions we make about our observations, this task can be very hard or not that hard. Typical approaches: if we go for laser range data, scan matching — we take the raw laser range information and try to align the different laser scans obtained at different points in time, and if we can match them well, we say: this part of the environment I see at the moment looks very similar to a part I have seen in the past, so it is quite likely that we are at the same place. Other approaches use features: for example, in the Victoria Park data set, trees were extracted from the laser range data — every trunk was treated as one feature or landmark — by fitting circles into the laser range data; whenever a good circle was found, it was taken as the trunk of a tree and used as a landmark. A third class of approaches uses feature descriptors; the most popular ones are SIFT and SURF, two features you can extract from image data which describe the local surrounding of the point where the descriptor is computed. If you have many of those descriptors in an image, you typically have quite a good description of the image, so you can match the descriptors and, based on them, identify whether two images were recorded at the same place. This is not free of errors and flaws, but you can match surprisingly large databases of images using those descriptors. So these are three popular techniques, or kinds of sensor information, that front ends use to make the data association, and we will look at short examples of them today.
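As a small, generic sketch of descriptor-based matching — not tied to any particular descriptor such as SIFT or SURF, and with names of my own choosing — here is nearest-neighbor matching with a Lowe-style ratio test, which accepts a match only when the best neighbor is clearly better than the second best:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors between two images.
    desc_a, desc_b: arrays of shape (n, d) of descriptor vectors.
    Returns index pairs (i in a, j in b) that pass the ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # unambiguous nearest neighbor
            matches.append((i, int(best)))
    return matches
```

Ambiguous descriptors — ones almost equally close to two database entries — produce no match at all, which is exactly the behavior you want when deciding whether two images show the same place.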
So you are typically in the following situation: the robot is currently at location A, and let's assume the blue circle over here is the sensor range — what the robot sees. In the pose graph we have built so far, we have two other places, B1 and B2 — places where the robot has been in the past — and I can estimate where those locations B1 and B2 are relative to the current pose of the robot, indicated here with A. So I can say: given I am here at the moment and that is my sensor range, I can compute where those other poses are — in this case B1 and B2, just as two examples; obviously there could be more. I can also estimate the uncertainty of those poses B1 and B2 relative to A: I do that by eliminating the node A from my linear system, inverting the resulting Hessian, and looking at the main diagonal blocks. This gives me the uncertainty — indicated here by the dashed ellipses — of where B1 is relative to A, and the same for B2. So based on this information I always have an estimate, given my current pose, of where B1 and B2 are, together with the associated uncertainty estimates. What I then do is extend the uncertainty ellipse by the visibility area of my scanner: if the robot was standing over here, that is the area it may have observed. Then I simply check whether there is an overlap between the current sensor range and the possible observation area computed for B1 or B2. This holds for B1: say these are the two-sigma bounds, so with 95% probability B1 sits inside this ellipse; if it was sitting here, this was its sensor range, and there is an overlap between the sensor ranges — I may see the same features or the same feature descriptors there, so I have a candidate loop closure.
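This gating test can be approximated very conservatively in a few lines. Here I inflate the position uncertainty by its largest standard deviation and by the sensor range of both poses; the specific rule and all names are my own simplification of what was described, not the lecture's code.

```python
import numpy as np

def is_candidate(rel_mean, rel_cov, sensor_range, n_sigma=2.0):
    """Decide whether past pose B (mean/covariance of its position relative
    to current pose A) could share its field of view with A.  We grow the
    n-sigma uncertainty ellipse by the sensor range of both poses and test
    whether A (the origin of the relative frame) falls inside that region."""
    # largest standard deviation of the 2x2 position covariance
    max_std = np.sqrt(np.max(np.linalg.eigvalsh(rel_cov)))
    reach = n_sigma * max_std + 2.0 * sensor_range
    return np.linalg.norm(rel_mean) <= reach
```

Poses that fail this cheap test — like B2 below — are never handed to the expensive observation-matching step.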
check whether a and b1 are seeing the same part of the environment, in order to look for loop closing constraints. If, in contrast, I look at b2, I can say that the uncertainty of b2, extended by the visibility range of my sensor, by no means overlaps with a, so it is extremely unlikely that a can match b2, and typically you ignore those cases. So you take, say, the two- or three-sigma bound of your estimate, and everything outside that bound you don't consider for potential loop closures. In this example you would check whether what the robot sees right now at a matches what it has seen at b1, and if you find a match you may accept it, but you don't even look into b2. So based on the uncertainty estimate that the robot currently has about the places where it has been in the past, it makes a decision: should I look for a loop closing constraint or not? In this case the views may overlap, so I really need to inspect my sensor data and see whether I find correspondences between what I have seen at b1 and what I see at a; here, however, the views basically cannot overlap, or the probability is so extremely small that I simply ignore it. Any questions about this idea so far? We will need it later on. Okay. How do we obtain this uncertainty? Before, I said I can compute the uncertainty relative to the current pose of the robot, indicated here with a, by inverting the Hessian. The problem is that inverting the Hessian is in practice a pretty expensive operation, so you want to avoid inverting this large matrix. You can use an approximation instead, which is what most systems in practice do to make this more efficient. The idea is: we simply ignore the loop closures for the moment, just for estimating the
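This gating idea can be sketched in a few lines of Python (an illustrative sketch with made-up numbers; the function name, the scalar sensor range, and the simple ellipse-growing step are our own simplifications of what a real system would do):

```python
import numpy as np

# Hypothetical sketch: gate loop-closure candidates by pose uncertainty.
# mean_b is the position of a past pose b relative to the current pose a,
# cov_b its 2x2 position covariance. We test whether b could lie within
# sensor range of a at the 95% level (chi-square, 2 DOF: 5.991).

CHI2_95_2DOF = 5.991

def could_overlap(mean_b, cov_b, sensor_range):
    """Return True if pose b, given its uncertainty, may overlap a's sensor range."""
    d = np.asarray(mean_b, dtype=float)
    dist = np.linalg.norm(d)
    if dist <= sensor_range:          # mean is already inside the sensor range
        return True
    # Grow the ellipse by the visibility range: shrink the offset toward a
    # by sensor_range along the direction to b, then apply the chi-square test.
    d_shrunk = d * (dist - sensor_range) / dist
    m2 = d_shrunk @ np.linalg.inv(cov_b) @ d_shrunk   # squared Mahalanobis distance
    return m2 <= CHI2_95_2DOF

# b1: close and uncertain -> candidate; b2: far away -> rejected
print(could_overlap([4.0, 0.0], np.eye(2), sensor_range=3.0))    # True
print(could_overlap([30.0, 0.0], np.eye(2), sensor_range=3.0))   # False
```

Only candidates that pass this cheap test are handed to the expensive scan-matching step.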
uncertainty, and do what is called Dijkstra expansion: we propagate the uncertainties through the graph. Where the robot is right now I assume zero uncertainty, and then I traverse the next edge, always the edge which generates the smallest increase of uncertainty to the next node, and go on, always accumulating uncertainty, until I reach all the poses I'm interested in. What this does is ignore the loop closures, so the uncertainty estimates are too big. But you can argue: okay, the estimates I get are too big, but I can compute them extremely efficiently, and while I may inspect a few places too many, I should still catch all the places I need to inspect in order to find the loop closures with, say, 95 percent probability. This is what is done in practice instead of inverting the matrix H. So far we have always tried to avoid explicitly inverting it, using sparse Cholesky decomposition or similar techniques, and one tries to avoid inverting this matrix in the data association step as well; that's just a side note. What we do then is simply check which areas overlap, and if there is an overlap between the sensor range of the robot here and the sensor range of the robot at that other point, I say: okay, this may be a match, or this may not be a match. The next question is: how do I determine whether there actually is a match? Say we assume my current observation matches the observation of a place seen twenty minutes ago; how do I proceed? This strongly depends on the data, laser data or camera data. Here is one example of a very simplistic front end which tries to identify constraints using the iterative closest point algorithm, so scan matching, and tries to find a match. How could we do that? The first thing we do is we
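As a rough sketch of the Dijkstra expansion just described (our own toy code, with scalar variances per edge instead of full covariance matrices compounded along the path):

```python
import heapq

# Hypothetical sketch of Dijkstra expansion: instead of inverting the full
# Hessian, propagate uncertainty outward from the current node, always
# expanding the node reachable with the smallest accumulated variance.
# Loop closures are ignored, so the bounds are conservative (too big).

def dijkstra_expansion(edges, start):
    """edges: dict mapping node -> list of (neighbor, edge_variance)."""
    best = {start: 0.0}               # conservative variance bound per node
    queue = [(0.0, start)]
    while queue:
        var, node = heapq.heappop(queue)
        if var > best.get(node, float('inf')):
            continue                  # stale queue entry
        for nb, edge_var in edges.get(node, []):
            cand = var + edge_var     # uncertainty only grows along a path
            if cand < best.get(nb, float('inf')):
                best[nb] = cand
                heapq.heappush(queue, (cand, nb))
    return best

graph = {'a': [('x', 1.0), ('y', 4.0)],
         'x': [('b1', 1.0)],
         'y': [('b1', 0.5), ('b2', 2.0)]}
print(dijkstra_expansion(graph, 'a'))   # b1 is reached via 'x' with variance 2.0
```

Every node whose bound keeps it inside the gating region is then checked for a potential loop closure.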
estimate the uncertainty of the other poses relative to the current pose of the robot, which is what we discussed so far. Then we take those poses which lie in the candidate area and simply sample poses in those areas, so we just draw poses. Going back to my example: I select b1 as a good potential match and sample random locations close to b1. From every one of those sampled points I then apply scan matching, in this case the iterative closest point algorithm: I try to align the current observation with the observation, or with the local map, that has been built at place b1, and then I evaluate how well those observations align by looking at the sum of the squared distances between the corresponding end points, those end points which I regarded as corresponding. If this error is below a certain threshold, I accept the match: okay, that looks the same, here I go, that's a constraint. That's a very simplistic technique; of course you can do much better, but this is the basis of most laser-based front ends today. You may use different techniques: please don't just take a naive threshold, you may use things like RANSAC, you may use better initializations, you may use better matching algorithms, you may build local maps and match local maps against each other in order to take more information into account. But in the end the core algorithm boils down to this. You can extend and improve all these aspects, but this is the basic decision based on range data: are two places the same, or, given an initial guess, what is the transformation between the locations from which those two scans have been taken? Okay, where do you see problems here, where would you see the failure points of this approach? If you were my customer and I proposed this solution to you, what would
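A minimal version of such an ICP-based acceptance test might look as follows (a toy 2-D sketch under our own assumptions, not the actual front end from the lecture):

```python
import numpy as np

# Minimal 2D point-to-point ICP sketch: align a current scan to a reference
# scan starting from an initial guess, then accept the loop closure only if
# the mean squared residual ends up below a threshold.

def icp_2d(ref, scan, iters=20):
    R, t = np.eye(2), np.zeros(2)
    idx = np.zeros(len(scan), dtype=int)
    for _ in range(iters):
        moved = scan @ R.T + t
        # nearest-neighbor correspondences (brute force for clarity)
        idx = np.argmin(((moved[:, None] - ref[None]) ** 2).sum(-1), axis=1)
        # closed-form alignment of the matched point sets (SVD-based)
        mu_m, mu_r = moved.mean(0), ref[idx].mean(0)
        H = (moved - mu_m).T @ (ref[idx] - mu_r)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:        # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ t + (mu_r - dR @ mu_m)
    residual = (((scan @ R.T + t) - ref[idx]) ** 2).sum(-1).mean()
    return R, t, residual

ref = np.random.default_rng(0).uniform(0.0, 5.0, (60, 2))
c, s = np.cos(0.02), np.sin(0.02)                   # small true motion
scan = ref @ np.array([[c, -s], [s, c]]).T + np.array([0.05, 0.02])
R, t, residual = icp_2d(ref, scan)
accept = residual < 0.01    # the naive threshold test criticized below
```

Note that this converges only because the initial guess (identity) is close to the true transformation, which is exactly the sensitivity discussed next.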
be your argument against it? Would you buy this approach which I propose? Where do you see potential problems, given what you have seen so far? Absolutely correct: if we have, let's say, symmetric or partially symmetric scenes, a corridor which I can match this way or that way, this approach would say: a perfect match, great, every single end point matches perfectly, I'm done. Maybe not. That's one point which may be a problem. It's an absolutely correct observation that this can happen; where in this list of steps would you locate the problem, where does it come from? Yes, it's a combination of the iterative closest point algorithm and the initialization. ICP depends on the initial guess and finds just one solution; it may be the right solution, but as you pointed out in your example, it may be the wrong one. So ICP is sensitive to the initial guess, and as a result we may end up in a local minimum, something which looks like a match but in reality is not a match. Another thing we may identify: how do I sample the possible locations where the platform can be? If the uncertainty covers a very large area, I may need to sample a lot of different poses; how do I do that efficiently, how do I even find a good initial guess? So these are the typical problems those approaches have: ICP is sensitive to the initial guess, we have local minima, we may have an inefficient sampling strategy for generating initial guesses for the ICP iterations, just accepting a match based on a threshold is something one might dislike, and, as you said correctly, ambiguities in the environment are a critical point. So we are looking into exactly those things, and I would start by showing you three different examples of systems that we have built
here in Freiburg. Well, the first and the last one are from Freiburg; this one is not a Freiburg vehicle, but some of the mapping techniques we developed here have at least been tested on that car. This is a Pioneer 2 robot which has a 2D laser rangefinder sitting here on a pan-tilt unit, so it always nods like this; it's called a nodding laser, and this way it generates 3D data, you get 3D information about the scene, and then it tries to build a 3D map of the environment using this technique. So let's start with this one. The robot provides odometry, and the 2D laser range scanner sits on the pan-tilt unit. We may assume the robot is standing still while taking its scans; maybe it's driving while scanning, but driving makes things a bit more complicated if you don't have a good odometry estimate, because while driving you don't observe the same part of the environment twice, and this makes any incremental alignment hard. The maps you get out of this look like this: red means non-drivable areas, yellow means drivable areas. This is building 79, this is the street here, it goes down to the parking lot. So this is an example of a map that you might obtain with this approach. How does it work? We have the robot, it uses odometry information, so it moves forward, takes a 3D scan, and based on the odometry information and our local 3D scans we can build a local 3D map of the scene. This map can be a point cloud accumulated over multiple poses, over multiple scans; it may also be represented by a 3D grid structure, or by a more efficient data structure such as an octree or multi-level surface maps, or other map representations that you may have seen in Introduction to Mobile Robotics, or that you will hopefully hear about next term. How the local environment
representation exactly looks doesn't really matter at this point; what matters is that we built a rigid structure which contains the 3D information. Then the robot continues with its odometry, drives five meters forward, takes another range scan and builds another local map, and what we can do then is take those two local maps and try to align them. Depending on how many scans you integrate, you could match single scans: if it's just one 3D scan (although in practice it's a number of 2D scans, for matching it's much easier to treat it as one 3D scan), you can take that single 3D scan, or you can combine multiple 3D scans if you have a good pose estimate between the individual scans. Sometimes it's easier to match full maps rather than individual scans; this depends on the data that you have and on how many ambiguities you may find in your environment. If you have a slightly bigger local view, so you really match a map against a map, that may be easier, or may have fewer local minima. So we match those maps and get a six degree of freedom constraint, x, y, z, yaw, pitch, roll, a six-dimensional constraint; this is actually one of those constraints here. Then we can accumulate all the constraints, do a graph optimization, and obtain a map. This again shows parts of building 79, buildings 51 and 52, the Mensa building, and of course some green areas which the robot hasn't seen. How does it align two scans? These are two examples of those 3D scans, taken from different positions, and there is an iterative alignment procedure, very similar to ICP, that aligns them. The approach here additionally separates the scan into parts, using a simple classification or segmentation of the environment, like walls and
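To illustrate how relative constraints from successive alignments chain together before optimization, here is a toy sketch in SE(2) (the real system uses the full six-dimensional constraints; the square-path numbers are made up):

```python
import math

# Toy sketch: chaining relative-pose constraints from successive local-map
# alignments into absolute pose estimates, shown in SE(2) for brevity.

def compose(pose, delta):
    """Apply a relative constraint (dx, dy, dtheta) in the frame of `pose`."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            th + dth)

# e.g. four constraints "drive 5 m forward, then turn 90 degrees"
constraints = [(5.0, 0.0, math.pi / 2)] * 4
pose = (0.0, 0.0, 0.0)
for c in constraints:
    pose = compose(pose, c)
print(pose)   # back near the start after four sides of a square
```

In the real pipeline these chained odometry-style constraints, together with loop-closure constraints, are what the graph optimization then makes globally consistent.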
things that stick out, and first matches those against each other; only at the end does it align all the points, because this way you are less likely to end up in a local minimum. So if you take the trees, the poles and the walls and match them first, provided you can separate those parts of the scan reliably, you typically are less likely to end up in a local minimum, and in the end you get this alignment. So this is the iterative, ICP-based procedure which aligns those scans, and I can do this for a large number of individual scans and end up with a map like this. You may observe some small stripes over here; these darker stripes are simply small alignment errors between the individual maps. You can also see small steps over here; that was probably also an alignment error, which simply leads to a step in the ground bigger than, say, five centimeters, and therefore it's classified as not traversable anymore, and therefore everything is red over there. What else do you see here? For example, the bikes which are parked over here, and the individual trees. Here something seems to have gone wrong, or maybe something was moving, maybe a car coming from the parking lot; and this area over here seems to contain a few slightly misaligned scans. But that's the way you can use 3D data to build a map of the environment. The next example is an autonomous car. It has a 3D scanner here on top, a Velodyne 3D scanner, which rotates and generates 3D point clouds at a very high frequency, and you can use basically exactly the same technique in order to align the different poses of the vehicle. Here you typically use an inertial measurement unit to estimate the relative movement of the vehicle, because the car is of course driving while the scan is taken, so that you get more or less accurate end points for every
single laser beam even while the vehicle is driving, put that into a local map, and apply exactly the same strategy as before. If your initial estimate is better and your laser data is high-quality laser data, then you get an example like this. You have seen this picture already: this is a parking lot, or a 3D model of a parking lot, where yellow again means drivable areas and red means non-drivable areas, and you can then use this map to localize the vehicle for autonomous driving. That was actually the work of a colleague who realized autonomous parking using this map representation, which was built for that vehicle in that parking lot. It's actually a picture of a parking garage, a three-floor building, with the corresponding 3D model; here you see the trajectory of the car during the mapping process, so it stops here, stops there, and builds a map. In this case these were around 1600 local 3D maps, done with a grid of 20 by 20 centimeter cells, and by aligning those grid maps you can actually get maps of, let's say, this quality. That's something you can expect to get with this technique. Okay, how does it look if we move to cameras, and maybe flying vehicles? If you go to flying vehicles, they have the problem of weight limitations: whatever those vehicles need to lift should be as lightweight as possible, and the more lightweight the sensors have to be, the lower the quality of those sensors tends to be. In this case the system was flying on a prototype of a helicopter; it never made it onto the blimp in the end. It was a self-assembled stereo camera system, two webcams assembled in a stereo setup, plus a small inertial measurement unit. One of the advantages of this setup is that the IMU also gives you the gravity vector, quite accurately and at a
high frequency. And if you know the gravity vector, you can already eliminate two of the six dimensions from your state space, because the roll angle and the pitch angle can be determined just by knowing gravity. So you only have one angular component plus x, y, z left to estimate, and therefore adding this IMU to the stereo system is highly advantageous, because it removes two degrees of freedom and estimates them quite accurately. You could even use the IMU to estimate the movement of the camera, or get an estimate of that movement, but this was not actually used here, at least in this example. These are the camera images, a webcam of about 2007, those were the times. This was actually the work of Bastian Steder, from his master thesis. And these are the tiles in front of our building, and the grass area. So if you take the stereo camera, point it downwards and walk over the green, these are the images that you get, and looking at these images you may already guess that it's maybe hard to travel over grass for extended periods of time; nevertheless the approach works surprisingly robustly and is able to estimate the trajectory as well. How does it do that? The first thing it does is extract features from the camera images, in this case SURF features. What these SURF features provide is a local description of a small part of the image: for every one of those points, a descriptor is computed from a local window around it, doing some local operations and returning typically a 128-dimensional vector which is a local description of the area over there. In reality it's typically a two-step process: first a so-called keypoint detector is
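The roll and pitch recovery from the gravity vector can be written down directly (standard formulas for a static accelerometer reading; the function name and sample values are ours):

```python
import math

# Sketch: recover roll and pitch from the gravity vector measured by the IMU.
# With these two angles known, only yaw and x, y, z remain to be estimated,
# reducing each camera pose from six to four degrees of freedom.

def roll_pitch_from_gravity(gx, gy, gz):
    """Gravity components in the body frame -> (roll, pitch) in radians."""
    roll = math.atan2(gy, gz)
    pitch = math.atan2(-gx, math.hypot(gy, gz))
    return roll, pitch

# Level sensor: gravity along the body z-axis
print(roll_pitch_from_gravity(0.0, 0.0, 9.81))   # (0.0, 0.0)

# Tilted 30 degrees about the x-axis
r, p = roll_pitch_from_gravity(0.0, 9.81 * math.sin(math.pi / 6),
                               9.81 * math.cos(math.pi / 6))
```

This only holds while the accelerometer measures mostly gravity; during aggressive motion the reading also contains the platform's own acceleration.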
executed, which tries to find, let's call them, stable areas in the image. The idea is: if you have two similar images of the same scene, you want to compute the descriptor values at the same positions. Therefore you first run a keypoint detector; these typically look for corners or edges in the image and compute the features at corners or at blobs, and there are different ways this can be done. Then, for every one of those detected points, one of those feature descriptors is computed. What the approach then does is say: based on those points, I try to estimate where they are in 3D space, just taking the stereo information into account, and based on the position of my stereo camera I try to build a local model of the surroundings. What we want to estimate is x, y, z and the three angles: roll, pitch and yaw. If your camera is looking forward, this is roll, this is pitch and this is yaw. As I said before, by adding the IMU to the stereo camera, and in this case the camera is looking downward with the IMU on top, we know the gravity vector, and this pins down the roll direction, because rolling changes the gravity vector, and the pitch direction, because pitching changes the gravity vector too. So by knowing the gravity vector I get rid of roll and pitch, and this reduces my problem from six dimensions to four dimensions for every node, for every camera pose, which makes my life easier, and therefore it is exploited here. Based on that I can compute by triangulation where the points are in 3D space, given that I see the points in both camera images. I then only need two features, two of those descriptors with their corresponding 3D coordinates in the image, in order to estimate the relative pose of the camera. In practice it looks like this: this was the camera pointing downwards, these are the images, and the red dots are
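For a rectified stereo pair, the triangulation step mentioned above reduces to the standard disparity formula (the camera parameters here are illustrative, not those of the actual webcams):

```python
# Sketch of stereo triangulation for a rectified pair: a feature at column
# u_l in the left image and u_r in the right image has disparity d = u_l - u_r,
# and its depth is z = f * b / d, with focal length f (pixels) and baseline b
# (meters). cx, cy are the principal point coordinates.

def triangulate(u_l, v, u_r, f=500.0, b=0.12, cx=320.0, cy=240.0):
    d = u_l - u_r                 # disparity in pixels
    z = f * b / d                 # depth along the optical axis
    x = (u_l - cx) * z / f        # lateral offset
    y = (v - cy) * z / f
    return x, y, z

x, y, z = triangulate(u_l=350.0, v=240.0, u_r=330.0)
print(z)   # 3.0 meters for a 20-pixel disparity
```

Note how the depth error grows as the disparity shrinks, which is why distant points from such a short-baseline webcam rig are much less reliable.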
positions for which those features have been computed. These are my features in the map; let's say this is the current image that I observe, it looks like this, I transform it over here, and then you can take this image and add it to your map, or add a constraint between the position where the current camera image has been taken and the positions where images have been taken at previous points in time. In practice, in order to be efficient, these approaches do for example the following. You have your image, you have extracted your SURF features from it, and you have a database of all the SURF features that you have seen in the past. The first thing you do is a nearest neighbor query in descriptor space, to find the best matching descriptors over your whole map, or only in those areas which are in line with what we discussed in the beginning, the a's and b's whose uncertainty regions overlap; so either the full database, or only those nodes where there is an overlap. You can typically do this very efficiently with a k-d tree, a data structure which allows you to search in time logarithmic in the number of data points, given a constant dimensionality, so it's a very efficient technique. Based on this k-d tree I can get, say, the 100 features which best match the current feature I'm considering, and I can do that for multiple features in my current image; this already gives me a pretty good idea of where I might be. Now, we said we need two matches in order to compute the transformation between the camera poses: we have one set of features recorded at time t1 and one set computed at time t2, and we need the 3D coordinates of two features in each image, two corresponding pairs, in order to estimate where the cameras are, under the assumption that I have the IMU, so that the roll and the pitch
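A sketch of the descriptor lookup with a k-d tree (using SciPy's `cKDTree` as one possible implementation; the random 128-dimensional "descriptors" are purely illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch: store all previously seen descriptors in a k-d tree and query the
# nearest neighbors of each descriptor in the current image, instead of
# comparing against the whole database linearly.

rng = np.random.default_rng(1)
database = rng.normal(size=(1000, 128))          # past SURF-like descriptors
database /= np.linalg.norm(database, axis=1, keepdims=True)

tree = cKDTree(database)

# a slightly perturbed view of descriptor 42, as if re-observed
query = database[42] + 0.01 * rng.normal(size=128)
dist, idx = tree.query(query, k=5)               # 5 best matching descriptors
print(idx[0])   # 42: the perturbed descriptor finds its original
```

In high dimensions the tree's advantage over brute force shrinks, which is why real systems often combine it with approximate search or restrict the query to the gated nodes.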
angle is given. So what we do is take those matches, simply sort them according to their quality, and start from the top. We begin with the best match: we take the first pair, together with another matching pair from the other image, and compute where the camera poses should be under the assumption that these pairs are correct. Then we take all the other features that we have seen, project them from one image into the other, and check whether the reprojection of the features agrees with where we actually see them. So we take a pair of features, compute the transformation based on them, and then use all the others to evaluate how good this proposed transformation was, and we can repeat this process until, let's say, we are happy, or a good pose has been found. This is a procedure which is very, very similar to RANSAC; it's actually a variant called GOODSAC, I believe, which was used here. You sample the few parameters that you need in order to compute a solution, then use all the other information to evaluate this solution, and you try that multiple times and see how often you find a consistent match. That's the way this works. This technique is used in three different ways in this approach. The first one is visual odometry: there is no wheel encoder on the camera, so if you put the camera on a flying vehicle, or take it in your hand and move it over the ground, there is no odometry in the classical sense, no wheel encoder which counts the revolutions of the wheels and this way gives an estimate of the relative poses. So what you can do is what's called visual odometry: you inspect the images at consecutive frames, estimate the positions of features, and then estimate the movement of the camera based on the features that you see and the 3D location of the
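The sample-and-verify loop can be boiled down to a toy example (here the model is just a 2-D translation estimated from a single sampled correspondence; a real system would estimate a full camera pose from two 3-D point pairs):

```python
import numpy as np

# Simplified RANSAC-style verification: a single candidate correspondence
# proposes a transformation (here just a 2-D translation), and all other
# matches vote on it via their reprojection error.

def ransac_translation(pts_a, pts_b, trials=20, tol=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(trials):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                  # model from one sampled match
        err = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = int((err < tol).sum())         # all other matches evaluate it
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

rng = np.random.default_rng(3)
a = rng.uniform(0, 10, (30, 2))
b = a + np.array([1.0, 2.0])                     # true translation
b[:5] = rng.uniform(0, 10, (5, 2))               # 5 wrong correspondences
t, inliers = ransac_translation(a, b)
print(t, inliers)   # roughly [1, 2] with about 25 inliers
```

Sorting the candidate matches by descriptor quality and sampling the best first, instead of sampling uniformly at random, is exactly the guided-sampling twist mentioned above.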
features, in exactly the same way as before. So this is how we can generate odometry information although we don't have physical odometry from wheel encoders. The second way this technique can be used is for matching your current observations against a small part of the environment: under the assumption that I know where I am in my map, and I have a map, I can match my observations against the map; it's a localization step. The robot is re-traversing an existing part of the environment and has a good estimate of where it is; that is what we refer to as localization. The last part is loop closing: given that I don't know where I am, say I have a large uncertainty, I can use this approach to see how well the features that I see at the moment match features I have seen in the past, and try to find an alignment. If it is a good alignment you may accept it, or you may try this over a couple of consecutive frames, so that not just one bad image screws up everything; you may accumulate, or wait until you find a few consistent constraints before integrating a match. That's what's typically done in reality. Here is an example of how this looks. This was Bastian Steder in 2007: a stereo camera on a kind of fishing rod, simulating a flying platform, which we didn't have at that time. This is the current camera image that you see, with the features that were extracted, and what you see here is the map that the system builds on the fly. So, walking over the green at the back of our building; today that area looks quite different, it's quite a while ago. You can see that with the visual odometry, although the individual images look rather crappy, you are able to get an incrementally not-too-bad estimate of your trajectory, although it's globally pretty wrong, which you will see in the
near future. Here the estimate looks a little bit better on the tiles, because you get better features to match. In a few seconds he will return to a place where he has been before. You can see him approaching; in reality he is actually here, so there is a substantial mismatch, and no features are matched at the moment. As soon as he goes back to his starting location, you will see that the system finds some features which are consistent, and some constraints are added to the graph. This is the place where he started; there was a book as a kind of reference, you can see it here, and some images; the system even found constraints along the grass, because on the way back he was walking along the same area again. These are all the loop closing constraints being added; and these here, the blue ones, are not loop closing constraints but places where the system re-localized, because he is moving back along the trajectory. Then you can apply the optimization; these are the individual steps of the optimization, and out comes this trajectory with the corresponding images and feature locations that the system has observed. You can then overlay that with our building; that was actually a Google Maps image from 2007, so the quality was bad, today it's much better. You see it doesn't fit perfectly, I think he walked in between here, but the rough estimate of the trajectory is actually similar to where he was really walking. You can evaluate that even better: the same experiment can be done indoors, where you put known markers into the scene, markers at known positions, these boxes you can see here. Then the stereo estimate, which is just taking the 3D positions of those features and mapping the textured image onto those 3D points. The different markers were placed on the ground at measured positions, measured with measuring tape, so not a precisely
perfect, highly accurate measurement, but let's say accurate up to a centimeter or so. You can then revisit the same place, close the loop this way, and then estimate where the marker positions are, comparing the map and reality. As you can see, here are all the constraints the system found; it does the graph optimization and turns them into an estimate of the environment, and this is the resulting map. This is the trajectory that the camera took, and here, this was one of those boxes: we are just taking the camera images and projecting them onto the 3D point structure, because we have the 3D pose information only for those feature points, not for the overall image. You can overlay that, and you can see the estimate against the ground truth information, the green lines, together with the mean and uncertainty that was estimated. So this seems to be a somewhat consistent estimate; you may see a small bias, it's not centered around zero, but given this self-made stereo setup, just strapping two webcams together, that was actually a very nice result. You can then use the same system, for example, on a blimp. This was an example where, in the end, exactly the same approach was used, but with only a single camera and a sonar which was measuring the depth information. The task of the blimp was always to hover on top of this location, which was marked by the book, I think, somewhere over here. So it's always trying to hover, and whenever it hovered over the target location, someone took it and threw it away, so the robot found itself somewhere else, building a map of the scene, trying to find a place it had seen before in order to re-localize, and whenever it re-localized, trying to estimate where it should go in order to find its way back to its home location, in order to hover over
this place. You also see that whenever someone opens the door there's a lot of wind coming in, so it's actually a little bit tricky for the platform to keep hovering, or people walking by, and then someone pushing the platform away again. So this is an example where one of these platforms uses the map in order to solve a navigation task: it builds a map online and uses the map to make navigation decisions about where it should actually go. It's an online process which requires us to build the map, update the map, and always come up with a consistent estimate of the map, in order to generate steering commands which guide the platform back to the desired location; in this case, again, hovering at this one location. Okay, these were the two problems we discussed, and we said how we can address them. ICP is sensitive to the initial guess, so one thing you can do is try to arrange things into maps instead of single scans, which helps; or we can separate the local perception into parts, take the walls and the obstacles that stick out and try to match them first, and then we are less likely to end up in a local minimum, but of course there is no guarantee. And for the inefficient sampling strategy: if you have feature descriptors, they can actually help you find good estimates of where you can be, so you don't have to try all camera poses and check whether they match; you can use your feature descriptors to pre-select images you may consider for potential loop closures. So that was the first part of the talk, which was more of a broad overview of how different approaches work, without going into too many details. In the second part of the talk today I would like to talk about ambiguities in the environment, what good ways of dealing with ambiguities are, and how we can, even though we have environments with ambiguities,
build accurate, consistent maps of the environment. So the main assumption here is no longer that we simply ignore all ambiguities and pretend the environment has none; we say there may be ambiguities, and ask how we can actually deal with them, and this is one of the approaches which does this nicely. To start with a motivating example: again we have our example with a and b. This is place a, the current local view of the robot, and let's say I have a map with a place b, which looks pretty similar to this place over here. Are a and b the same place? The ICP-based matching approach would say: let's sample some positions here and try to find a match, and we will actually find a match here; this structure will match quite well for an ICP-based alignment technique. But we should ask: is this really the same place, can we even make that statement? What's the problem in this example? Exactly: there is a lot of white, unobserved area. The only thing we can say is that a and b might be the same place, but there might be something else which looks exactly like place a. So it may not be a good idea to add this constraint unless we have seen all of this part over here. If you look at reality, this is how the structure actually looks, and these are three matching hypotheses, of which this one is the correct one. So before adding this constraint, maybe it's better to first explore this whole scene before we make this data association. This is something which is called a global ambiguity: there may be different, non-intersecting places where the system can be, and therefore I should not do a match. This was just an example of what you want to make sure of, so there are
two ways you can address that. The first is to close your loops as early as possible, so the uncertainty does not grow too much; the smaller the relevant area, the more likely it is that a match is unique. The second is to simply cover the whole area, so that the uncertainty ellipses shrink: once you have found one correct match, the uncertainty of the other candidates decreases as well, which helps the system. But we are not doing the active approach here of deciding where to explore in order to build a good map; that was just a loose remark on my side. We are really looking only into SLAM: someone else steers the platform, and we just want to make the decision, is this a constraint or not. So this was a global ambiguity, which is something we don't want to have. The opposite case is when the uncertainty is small and the match fits well; this is called global sufficiency. A globally sufficient match means there is no other place within the uncertainty region of that node where the scan could fit. These are exactly the situations I am interested in finding, because there I can say the match cannot be anywhere else. However, there is a second problem with ambiguities, called local ambiguity, and where there is a global ambiguity there is quite likely also a local one. A typical example is a corridor with the small pillars that hold the doors: because it is a repetitive structure, you see multiple copies of it in the map and in your scan, so A could be here, or here, or here. Although the uncertainty ellipse contains no completely different place, there are multiple
hypotheses for how the scan can match inside it, and these hypotheses overlap; that is why it is called local. In the global case the candidate matches are non-overlapping; here they overlap, so I don't know which of the candidate placements this scan corresponds to, whether it fits here, here, or here. This is also called the picket fence problem: looking at a picket fence, you don't know which part of the fence matches what you currently see, because it is a long repetitive structure. In such situations you also don't want to add a constraint, because you simply cannot tell whether it is locally ambiguous or which placement is correct. What you could do is use the max-mixture approach and add a multi-modal constraint, "I am either here, here, or here"; but at the time this approach was proposed, max-mixtures did not exist yet, so this was not foreseen. It was actually the same author, Edwin Olson and his group, who developed this approach and later on max-mixtures, so this would be one nice application for them. Assuming we don't have max-mixtures, we have to treat these situations separately. So we can actually do two tests. The first is a global sufficiency test: we want to ensure there is no possible disjoint match within the uncertainty ellipse, meaning A cannot be at a completely different place. The second is local unambiguity: there are no overlapping matches, so by rearranging the scans a little I cannot find another good match. These are the two conditions I want to make sure actually hold, and the approach
that Edwin Olson and his team proposed works as follows. The SLAM back-end gives me the current estimate of the graph as a prior. The front-end performs pose-to-pose scan matching, very similar to what you have seen so far, and based on that it does a topological grouping: poses which are nearby, in the same part of the environment, are grouped together, a bit like the small local maps in the hierarchical pose-graph approach. Then it does two things. First, within each of those topologically grouped sets of nodes, it tries to find constraints that are consistent with each other. In a corridor, for example, I typically find a lot of constraints in such a local group, some saying "one meter forward", some "two meters forward", some "three meters forward", but nothing in between; if I find subsets of constraints that agree among themselves but disagree with the other subsets, that is an indicator for the picket fence problem. Identifying this situation is the first step. Second, once that is resolved, it performs a much simpler global ambiguity test, which takes into account the area I know given the uncertainty ellipse and checks whether another part of what I have seen would also fit in there and still allow a match. With these two tests we can actually eliminate a very large number of false-positive constraints. That is actually one of the
state-of-the-art techniques used in SLAM front-ends, and most of the mapping systems that we use here in Freiburg run this method in order to filter the constraints found from the sensor observations and decide whether each one seems okay or seems to be wrong. So those are the two criteria: the test for local unambiguity and the test for global sufficiency. Let us go through the three steps. The first is the topological grouping, which is easy to do: I take my pose graph, check which poses are nearby, and try to match all of them, for example in a pairwise fashion, forming local match groups. Whenever my matcher says a pair could be a good fit, I add it to a temporary constraint list, shown here in red: this pose matches against that pose, this one against these two, and so on. Some of these candidate constraints are likely to be right, some are likely to be wrong; the question is how to identify which is which. Again, the first test is for local unambiguity: I take one of those groups and check whether there is a risk of the picket fence problem. How can we do that? If I simply match all against all, keep everything above a certain threshold, remove the rest, and optimize the map, I get the situation shown here: walls appear duplicated, so it is clearly an inaccurate map of the environment.
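As a minimal sketch of the topological grouping step, poses whose estimated positions lie close together can be collected into one local match group. The greedy clustering below and the radius value are my own simplifications for illustration, not the exact grouping used in Olson's front-end.

```python
# Sketch: group 2-D pose positions into local match groups by proximity.
# Greedy strategy: join the first existing group containing a nearby pose,
# otherwise start a new group (hypothetical simplification).

def group_poses(positions, radius):
    """Greedily cluster 2-D pose positions into local groups of indices."""
    groups = []
    for idx, p in enumerate(positions):
        placed = False
        for g in groups:
            if any((p[0] - positions[j][0]) ** 2 +
                   (p[1] - positions[j][1]) ** 2 <= radius ** 2 for j in g):
                g.append(idx)
                placed = True
                break
        if not placed:
            groups.append([idx])
    return groups

# two revisited areas plus one isolated pose (positions are made up)
poses = [(0, 0), (1, 0), (10, 0), (10.5, 0.2), (30, 30)]
groups = group_poses(poses, radius=2.0)
```

Within each resulting group, all pairwise matches are then attempted, as described above.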
If, however, I keep only the correct matches, I get a consistent map of the environment. The reason for the failure is this repetitive structure: one part of it may be seen again and mistaken for a similar part, a picket-fence situation. The question is how we get from the inconsistent set to the consistent one, that is, how the system decides which the correct local matches are and whether there is a local ambiguity or not. If there is none, I am happy; if there is a local ambiguity, it may be better not to add a constraint. So how do we actually obtain those locally consistent matches? That is one of the key questions here. The key trick is this: we have a large number of pairwise constraints between nodes, and we want to check how many consistent subgroups there are. Imagine assigning a group ID to every constraint, with the goal that within one group all constraints are consistent with each other. Maybe I find two groups where, within each group, the constraints all agree, but between the groups they imply a different transformation between the two traversals of the place. That is exactly an indicator for the picket fence problem. How does it look? Say we have two passes of the trajectory through the same place, the first visit shown in red and the second in blue, and I obtain matching constraints between them, for example two hypotheses h_i and h_j out of four constraints. The idea for checking which constraints are consistent is to check whether they transform the environment
in the same way. This is done using what we call the prior edges, the edges that result from odometry or incremental scan matching. Starting from one node, I follow the odometry within the first visit, take the matching constraint h_j to jump over to the second trajectory, move along the odometry of the second visit, and then go back with the inverted h_i to the place where I started. If all the constraints in this loop are correct and consistent, concatenating them should give the identity transformation: I start at pose one of the first visit, go to pose two of the first visit, take the matching constraint to pose two of the second visit, walk back to pose one of the second visit, and return to pose one of the first visit. Only if the two matching constraints agree with each other and with the collected odometry will I end up at the identity. The trick is now to do this for all pairs of constraints, see which pairs end up close to the identity, and find the biggest group of mutually consistent transformations. You look a bit skeptical; of course you will end up with something which is only close to the identity matrix, and I am not enforcing an exact identity. How far away from it you may be simply depends on how accurate your odometry is and how accurately you can align your scans. Given a number of those hypotheses, I can then build a matrix A whose entry a_ij expresses how consistent hypotheses i and j are with each other, given the odometry.
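To make the loop-composition idea concrete, here is a small sketch in SE(2): compose the first visit's odometry, the hypothesis h_j, the inverted second-visit odometry, and the inverted h_i, then measure how far the result is from the identity. All transform values are made up for illustration.

```python
import math

def compose(a, b):
    """SE(2) composition a ⊕ b of (x, y, theta) transforms."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def invert(t):
    """Inverse transform, so compose(t, invert(t)) is the identity."""
    x, y, th = t
    c, s = math.cos(th), math.sin(th)
    return (-(c * x + s * y), s * x - c * y, -th)

def loop_residual(odom1, h_j, odom2, h_i):
    """Walk odom1, jump with h_j, walk back along odom2, return via h_i⁻¹."""
    t = compose(compose(compose(odom1, h_j), invert(odom2)), invert(h_i))
    return math.hypot(t[0], t[1]) + abs(t[2])

# both hypotheses place the second pass 1 m to the left: loop closes
consistent = loop_residual((2, 0, 0), (0, 1, 0), (2, 0, 0), (0, 1, 0))
# h_j shifted by 1 m: the loop no longer closes at the identity
inconsistent = loop_residual((2, 0, 0), (1, 1, 0), (2, 0, 0), (0, 1, 0))
```

A small residual indicates the two hypotheses agree with each other and with the odometry; a large one indicates at least one of them is wrong.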
Each entry a_ij comes from making this walk around the graph and measuring how close I end up to the identity: if I am close, that is good; if I am far away, something has gone wrong. So the value a_ij is the likelihood that the loop through hypothesis i and hypothesis j ends up at the identity transformation, given those two constraints. A high value means it is very likely the loop closes at the identity; a low score close to zero means I am far away from it. We may simply use a Gaussian over the distance from the identity. What we end up with is a matrix with some high entries and some low entries, and this matrix tells me how well the individual hypotheses agree with each other pairwise. Make sure it is clear to everyone what this matrix means, because we will need it later on; if you don't understand the matrix, the rest will be hard. Every entry a_ij tells us how well hypothesis i and hypothesis j agree with each other; it is just a pairwise consistency measure. Small values mean the pair does not agree; high values mean it does, which may be good. So far, however, this has not helped me to identify the groups; it only says which pairs are consistent with each other.
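Building the consistency matrix can be sketched as follows: given the loop residuals r_ij for each pair of hypotheses, score them with an unnormalized Gaussian on the distance from the identity. The residual values and sigma below are invented for illustration.

```python
import math

def consistency_matrix(residuals, sigma=0.5):
    """a_ij = exp(-r_ij² / (2σ²)): near 1 for a perfect loop, near 0 for a bad one."""
    n = len(residuals)
    return [[math.exp(-residuals[i][j] ** 2 / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

# hypotheses 0 and 1 close their joint loops almost perfectly; 2 is an outlier
r = [[0.0, 0.05, 2.0],
     [0.05, 0.0, 2.1],
     [2.0, 2.1, 0.0]]
A = consistency_matrix(r)
```

The matrix is symmetric by construction, since the loop through hypotheses i and j is the same loop as through j and i.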
What I now can do is define an indicator vector consisting of zeros and ones; such a vector is one hypothesis about which constraints are consistent. A one at field i means the assumption that h_i is correct; a zero means it is not correct. So every such vector is one hypothesis about the consistency of my matches: all ones means all of them are correct, all zeros means all of them are wrong. The goal is to find this vector, so we need a way to determine what the indicator vector v should look like. Now comes the trick: we combine the indicator vector with the matrix. If I compute v transpose times A times v, then whenever v_i and v_j are both one, this picks out the corresponding entry a_ij of my matrix; so this product gives me the sum of the pairwise consistencies of all hypotheses selected by the indicator vector. If all entries of v are one, I get the sum of all elements of the matrix; if only two entries are one, I get just the corresponding elements. Then I divide by v transpose times v, the scalar product of v with itself, which, since v consists of zeros and ones, is simply the number of hypotheses marked correct. So what I get is the sum of all pairwise consistencies between the selected hypotheses, divided by the number of selected hypotheses: the average pairwise consistency.
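The average pairwise consistency score λ(v) = vᵀAv / vᵀv for a 0/1 indicator vector can be written out directly. The small matrix below, with two agreeing hypotheses and one outlier, is invented for illustration.

```python
def score(A, v):
    """Average pairwise consistency of the hypotheses selected by 0/1 vector v."""
    n = len(v)
    num = sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))
    den = sum(x * x for x in v)  # equals the number of selected hypotheses
    return num / den

A = [[1.0, 0.9, 0.1],
     [0.9, 1.0, 0.1],
     [0.1, 0.1, 1.0]]

good = score(A, [1, 1, 0])   # the consistent pair only
mixed = score(A, [1, 1, 1])  # adding the outlier dilutes the average
```

Selecting only the mutually consistent pair yields a higher average than mixing in the outlier, which is exactly why maximizing this score identifies the consistent subset.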
Given an indicator vector v, I can now compute this quantity, and it gives me a score: the higher, the better. If I select a group where everything agrees, I can put ones at all those positions and get a high score. If my selection mixes two groups, I get high scores within each group but not between them, while still dividing by the larger number of selected hypotheses, so the average drops. So this function takes high values for good sets of hypotheses and low values for bad ones. Again we can treat this as an optimization problem: find the vector v which maximizes this fraction, that is, maximize lambda as a function of the indicator vector v. The problem is that v is constrained to contain only zeros and ones, and under this constraint this is an NP-hard problem; it corresponds to a densest-subgraph problem. To find the best v exactly, I would need to try out all possible assignments, which for a large number of constraints simply does not work. So I know how to compute the score, but it is too computationally demanding to optimize exactly; therefore the idea is to find an approximation, and there actually is a pretty good one. I simply stop treating v as discrete and allow continuous values, because then I can actually optimize the expression, and at the end I round to 0 or 1. It may not be perfect, but under the assumption that the continuous problem has a similar structure to the discrete problem, I should come up with a very similar solution. There is no guarantee that this is the case; it is definitely an approximation, and it is a different problem that I solve when I go from
binary variables to continuous variables; the assumption is just that I am not too far away, without any theoretical guarantee. Okay, how do I maximize this function? I compute the first derivative and set it to zero. I don't want to go into the details of how the derivative is obtained, but it turns out that solving the resulting equation is equivalent, for symmetric matrices A, to solving A v = lambda v. Does this formula look familiar? Exactly: it is the eigenvalue problem, matrix A times vector v equals a scalar times v. So in order to maximize the score, I simply need to solve an eigenvector/eigenvalue problem. As a solving technique I could use the SVD that we discussed a few months ago, when we talked about diagonalization somewhere in this course, or solve the characteristic polynomial to get the eigenvalues and the set of eigenvectors. The vector v that maximizes the original expression corresponds to the maximally consistent subset, the one with the maximum average consistency. If this eigenvalue problem has multiple solutions with similar eigenvalues, there are simply multiple maxima in my problem. I can then inspect the individual eigenvalues and eigenvectors: the larger the eigenvalue, the better the score of the corresponding solution, and the eigenvector tells me which constraints to switch on and off. So if I
visualize this, one situation looks as follows: plotting the eigenvalues lambda_i of the first, second, third, fourth eigenvector, if the first value is high and all the others are low, there is one clear best solution, the pair lambda_1 and v_1, whose score lambda_1 is the highest; all other solutions perform much worse. If, however, solutions one, two, and three perform more or less the same, so lambda_1 is approximately equal to lambda_2 and lambda_3 while the indicator vectors differ, then I have three different solutions with roughly the same consistency score. And this is exactly the indicator of the picket fence problem: different consistent subsets of constraints which do not agree with each other. So the only thing I need to do to detect a picket fence is to look at the ratio between the largest and the second-largest eigenvalue. If this ratio is close to one, say between 1 and 2, it is very likely a picket fence situation, because there are two solutions of similar quality. If the ratio is much larger than whatever threshold I choose as my parameter, it is very unlikely, because the second solution is much worse than the first. This still assumes that my topological grouping was done in a fair manner and includes all the relevant constraints, but under that assumption this is exactly what I get. So what I do is compute the first and the second eigenvalue and compare them.
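The eigenvalue computation and the ratio test can be sketched without any linear-algebra library using power iteration plus one deflation step; this is my own simplification, and any eigensolver (for example via an SVD) would do. The matrix encodes three mutually consistent hypotheses and one outlier; the values and the threshold of 2 are illustrative assumptions.

```python
import math

def power_iteration(A, iters=200):
    """Dominant eigenvalue/eigenvector of a symmetric matrix with positive spectrum."""
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))  # ||A v|| -> lambda_max
        v = [x / lam for x in w]
    return lam, v

def deflate(A, lam, v):
    """Remove the found eigenpair: A - lam * v vᵀ exposes the second eigenvalue."""
    n = len(A)
    return [[A[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]

# three mutually consistent hypotheses (0-2) and one outlier (3)
A = [[1.0, 0.9, 0.9, 0.1],
     [0.9, 1.0, 0.9, 0.1],
     [0.9, 0.9, 1.0, 0.1],
     [0.1, 0.1, 0.1, 1.0]]

lam1, v1 = power_iteration(A)
lam2, _ = power_iteration(deflate(A, lam1, v1))
unambiguous = lam1 / lam2 > 2.0  # ratio test: one clearly dominant solution
```

Here the dominant eigenvector puts large weight on the three consistent hypotheses and little on the outlier, and the eigenvalue ratio is well above the threshold, so this group would pass the local unambiguity test.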
If lambda_1 divided by lambda_2 is larger than, say, 2, then I regard solution one as locally unambiguous, meaning that with high probability there is no picket fence problem; of course one may still occur, but that is the decision rule, and that is my assumption here. What do I still need to do? I need to discretize v_1 to zeros and ones in order to find my set of activated constraints. One point here: the eigenvector returned by the eigensolver is normalized, so if I just rounded v_1 directly I would probably end up with zeros everywhere. That is easily solved: I multiply the vector by a scalar constant c, and I just need to find the constant c such that the score of the discretized vector c times v_1 is maximized. Since v_1 is already given, this is only a one-dimensional search problem, which can easily be done. So now I have solved the local ambiguity problem, or at least I am able to say whether there is a picket fence problem or not; that was the first thing I wanted to check, and this is actually a very nice way of identifying it. The second question is the global one: given one potential match, is there some other area within the uncertainty ellipse, for example enough white, unobserved space, where the scan would fit a second time?
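The one-dimensional search for the discretization constant described above can be sketched like this; the matrix, the (roughly normalized) eigenvector, the 0.5 rounding threshold, and the step count are all assumptions of the sketch.

```python
def score(A, v):
    """Average pairwise consistency of the hypotheses selected by v."""
    n = len(v)
    den = sum(x * x for x in v)
    if den == 0:
        return 0.0
    return sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n)) / den

def discretize(A, v1, steps=100):
    """1-D search over scalars c: round c*v1 to 0/1 and keep the best score."""
    best_v, best_s = None, -1.0
    vmax = max(abs(x) for x in v1)
    for k in range(1, steps + 1):
        c = k / steps / vmax  # scales the largest entry into (0, 1]
        v = [1 if c * x >= 0.5 else 0 for x in v1]
        s = score(A, v)
        if s > best_s:
            best_s, best_v = s, v
    return best_v, best_s

A = [[1.0, 0.9, 0.9, 0.1],
     [0.9, 1.0, 0.9, 0.1],
     [0.9, 0.9, 1.0, 0.1],
     [0.1, 0.1, 0.1, 1.0]]
v1 = [0.57, 0.57, 0.57, 0.09]  # roughly the dominant eigenvector of A
chosen, s = discretize(A, v1)
```

The search correctly activates the three consistent hypotheses and leaves the outlier switched off.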
If that is the case, I cannot be sure that this really is a constraint; it could be one, but there is simply an area within the relevant uncertainty region which I have not seen so far. In theory I would need to really check that area and see whether the scan fits in there. An approximation is to compare the sizes of the two ellipses: I compute the eigenvalues of the pose-uncertainty ellipse and of the matched area. If the eigenvalues are the same in x and y, it is a circle; the larger one eigenvalue is, the longer the corresponding main axis. I then compare: if the smallest eigenvalue of the uncertainty ellipse is larger than the largest eigenvalue of the area I have observed, there may be a possibility to squeeze the scan in somewhere else. So just by comparing the eigenvalues along the dominant axes of those ellipses I can make the decision. For very narrow matches, say very narrow corridors, this approximation may lead to a failure, because the scan might still be squeezed in somewhere, but that is not the case for most examples. With this, the pipeline is complete. Most of our time was spent on criterion two, the test for local unambiguity: if I find a picket fence problem, I abort and do not add a constraint. If the test passes, I do the global sufficiency test; if there is a global ambiguity, I again add nothing, and otherwise I have a loop closure which with high likelihood is globally consistent and locally unambiguous. That is how a lot of SLAM systems work, including the system that we use here: you have seen this video already, where the robot maps our campus; we start in front of our building.
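The ellipse comparison used in the global sufficiency test above can be sketched as follows. The covariances, the closed-form 2x2 eigenvalues, and the exact decision rule are my paraphrase of the lecture's approximation, not Olson's precise formulation.

```python
import math

def eig2x2_sym(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]], largest first."""
    mean = (a + c) / 2.0
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean + d, mean - d

def globally_sufficient(cov_pose, cov_area):
    """Crude test: the known area must cover the uncertainty ellipse even
    along the ellipse's smallest axis, otherwise a second disjoint match
    could hide in the unseen part."""
    _, pose_min = eig2x2_sym(*cov_pose)
    area_max, _ = eig2x2_sym(*cov_area)
    return area_max >= pose_min

# covariances given as (a, b, c) for [[a, b], [b, c]]; values are made up
small_uncertainty = globally_sufficient((1.0, 0.0, 1.0), (4.0, 0.0, 2.0))
large_uncertainty = globally_sufficient((9.0, 0.0, 9.0), (4.0, 0.0, 2.0))
```

With small pose uncertainty the known area dominates and the match is accepted as globally sufficient; with large uncertainty the test refuses to commit.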
The robot continues moving, and the system does exactly what we discussed: when the area turns red, that is where it looks for potential loop closures based on the uncertainty; it then tries to find matches, collects a few of them, groups them, checks whether there is a picket fence problem or a globally ambiguous situation, and if not, it adds the constraints and in this way builds the map of the environment. The front-end here is basically a reimplementation of Edwin Olson's method for performing these tests, and we have found it a very successful technique for the SLAM problem: it adds no, or only a very small number of, false-positive constraints. Okay, to conclude today's talk: this lecture had much more front-end content than back-end compared to the previous ones, and I tried to give you an idea of how a front-end works. In the end it strongly depends on the sensors you are using; the better you can exploit the individual properties of your sensor, the better it is, and how exactly this is done depends so strongly on the sensor that it is hard to give a general framework. But the global ambiguity test and the local ambiguity test are important things that a good front-end should consider in order to avoid adding false-positive constraints, and here this is done with a single-cluster graph partitioning approach, the technique due to Olson. Again, regarding the position uncertainty of the platform: the higher the uncertainty in an area, the more of that area I need to know in order to decide whether there is a global ambiguity or not; if the uncertainty ellipse is big, I need to have seen all of that area to say this is the only place where the match can be. And
for more about the spectral clustering technique, the original paper with all the information is the work by Edwin Olson, "Recognizing places using spectrally clustered local matches", which is exactly the approach I presented here. A couple of the slides and at least some of the image material I used come from Edwin Olson; thanks for allowing me to use that. That is it from my side on the front-ends, and the only thing I would like to do in the remaining lectures is give a ten-minute summary of the most important things of the lecture that
[playlist: SLAM_Course_2013 | file: SLAMCourse_01_Introduction_to_Robot_Mapping_201314_Cyrill_Stachniss.txt]
Okay, after this kind of introduction, let me give you a short overview of the course. The goal of this lecture today is not that you gain very in-depth knowledge about something, but that you get a pretty good idea of what you can expect from the course: what topics we are going to cover and why what we are doing is relevant. That is something you should be able to see today. First, look at the title of the course, robot mapping. The first two questions that may arise are: what is a robot, and what is mapping? A robot is a device that moves through the environment, and in most cases this device is equipped with sensors; actually, all robots that we consider here are. There are also robots with such poor sensing capabilities that they are more or less sensing-free, but that is nothing we are interested in here. We look into mobile robots that move through the environment, which means they have some kind of wheels, legs, wings, propellers, or other means to move: mobile devices that drive, fly, or swim through the environment, whatever you can imagine. The second thing we look into here is devices with sensors. Some people actually call robots "sensors on wheels", which is in some sense true: depending on the application, it may be sufficient to have sensors on wheels and very limited computation power, or you may want a more powerful robot. The most important things that we use in this course are the perceptions and the controls that are sent to the robot. If the robot says "drive a meter forward", sends this to the motors, and the motors drive a meter forward, or approximately a meter forward,
that is valuable information. Likewise, whenever the robot has a camera and takes a picture of some part of the environment, or has a laser scanner installed and gets proximity measurements to the closest obstacles, that is our sensor information. We want to exploit the controls and the sensor information to gain knowledge about the environment, which brings us to the second part of the title: mapping. We look into robots that build a map of the environment. "Map" is a very flexible term: it can be any representation, any model of the environment, and it does not need to be a metric map like the city map or subway map you are used to. There are geometric maps, topological maps, all kinds of representations that the robot uses to model the environment, collect knowledge about it, and in the end use for decision-making. Any questions at this point? Okay, here are some terms that occur in the context of robot mapping; some are closely related to it, some less so but still influenced by this course. We start with state estimation. State estimation means we have a state, say the position of the robot in the environment or the position of a landmark in a given reference frame, and we want to estimate this quantity, the state the world is in. We want to estimate it because we do not know it perfectly; we just get sensor data, and the sensor data is noisy in most cases, actually always; the noise can be very small, but often it is not. For example, we want to know where the robot is, or where a landmark in the environment is; that is what is often meant by state estimation. One way to do state estimation, given noisy observations and noisy controls, is the recursive Bayes filter, which you may have seen.
We'll look into this recursive Bayes filter in more detail next week; it's one way to do state estimation, to estimate the state of the world, where "the world" can be whatever you want to model: the position of the robot, the positions of landmarks, whatever you're interested in. This is closely related to the second problem over here, localization. Localization is a part, or an application, of state estimation: in most cases it means we want to estimate where our device, our sensor, our robot is in the world. We often talk about the term "pose" here, which refers to the location, the x/y coordinates for example, as well as the orientation, the angular orientation of the robot, so in which direction the robot looks. "Location" often means just the x/y coordinates. The terminology differs between fields; in robotics this usage has become more or less standard, while other fields that do mapping and are related to what we're doing here may use a completely different terminology. Okay, then we have mapping. Mapping in the robotics sense usually means: I have sensor data, and I want to estimate a model of the environment. In a lot of cases, this use of "mapping" implies that you know where your sensor is. This is not the case in the general mapping framework, but in robotics, when we talk about mapping, we typically mean that we know where the sensor is and just want to estimate the quantities we observe. For example, I know the position of a sensor here, and with this laser measurement device I want to measure where the wall is with respect to the position of the laser pointer; that's a measurement, and I just want to estimate where the wall is. That's mapping. If I don't know where the sensor is, that's typically a combined problem which includes localization, telling where the device is, and mapping, telling where the wall is. So in robotics, "mapping" often assumes the position of the robot or the sensor is known, and if you don't know it, you actually end up in SLAM. SLAM stands for simultaneous localization and mapping; it means you want to simultaneously estimate the position of your sensor and the state of the environment. Therefore people sometimes, or often, distinguish between mapping and SLAM, although in sloppy terms "mapping" can mean the whole thing, because in the end you're often interested in the map of the environment. That's why the course is called Robot Mapping, even if that's a bit sloppy in the terminology we often use in robotics. Okay, the next thing which is important is navigation. Navigation means I have a device, a robot, which can make its own decisions; I may tell the robot "go to that corner over there", and the robot needs to decide: should I navigate around here, or should I take a left and then turn right over there to end up at the desired target location? Navigation is not a topic that we are going to address in this course, but navigation strongly relies on SLAM, or on mapping: to plan a path and navigate somewhere, you are required to have a map and to know where the vehicle is. So navigation is not explicitly addressed in this course, but one of the key motivating factors of this course is that we want to have models of the environment and estimates of the robot's location in order to carry out navigation actions. We may also do other things: we may manipulate objects with a manipulator while the robot does not move at all; then you also need to know
what the environment looks like around the robot. So whatever decision you're making, it's often very helpful to have a model of the environment at hand; that's one of the motivations for this course. Motion planning, then, is very tightly coupled with navigation: you really want to plan where to go, perhaps satisfying some constraints, maybe finding the optimal motion sequence to reach your goal location. You may also plan motions of manipulators, as I said before, so it can go beyond moving a rigid body in x/y space and become more complex. Again, that's nothing we are going to address in this course, but these are techniques which benefit from having an environment model. So we will look into these four aspects, where SLAM is the most important one we cover; if we solve SLAM, we also solve localization and mapping, and in order to solve the SLAM problem we need state estimation. That's the rough picture of what we are going to address here. Any questions so far? Yes, please? [Student question about the relation between motion planning and navigation.] Yes, so as I said, in motion planning you plan a trajectory, a sequence of states the system should be in, in order to reach a certain goal configuration. It can involve manipulators: say we have a mobile manipulator, and the question is how the arm should get into this hole over here; that's a motion planning problem, but you wouldn't typically call it navigation. Navigation typically refers to a device which moves in x/y/theta space or in three-dimensional space, physically guiding a robot from one location to another. But the two are very tightly coupled: most navigation systems use motion planning algorithms like A* or variants of A* that you may have used, for example for guiding a vehicle from location A to location B. Planning is not restricted to that, though; you can also use planning for other kinds of problems.
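Since A* came up as a side note: a minimal grid-based A* can be sketched in a few lines. The grid, the unit step costs, and the 4-connected neighborhood below are a made-up illustration, not course material:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks a blocked cell.
    Heuristic: Manhattan distance, admissible for unit step costs."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # entries: (f, g, node, path so far)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(path)  # detours around the blocked middle row
```

With an admissible, consistent heuristic like this one, the first time the goal is popped from the priority queue the path is optimal.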
So what I told you before was more the core robotics perspective on this, but in general you're absolutely right. Any other questions? Okay, so let's go a little bit more into the details of what SLAM is, not yet formally, but formalizing it a bit more, so that we end up with a formal description in the end. What we are interested in is computing the different positions of the robot at different points in time, which gives a trajectory or path the robot took, as well as a map of the environment, a model of the environment. There are different types of environment models you can imagine; we'll only touch on them at a high level today. That's our main objective. And then we have the problem of localization, which means estimating the robot's location; this actually means estimating the trajectory, so where the robot has been at different points in time. Then we have mapping, which is trying to build a map, the environment model. And SLAM, simultaneous localization and mapping, means doing both at the same time. In reality, the pure localization and mapping problems, I don't want to say they don't exist, but in the end you almost always need to solve the SLAM problem unless you have some perfect external information. If God told you where the robot was at every point in time, yes, you could do pure mapping; and if someone told you exactly what the environment looks like, yes, you could do pure localization. But quite often these problems strongly interact, so if you don't have a model given by someone, a model you can assume to be correct, you typically need to do some form of SLAM to solve both problems at the same time. It may be that the environment model you have is sufficiently good or sufficiently accurate that you can say "okay, I ignore the rest and just do localization"; this case exists. Or you may say the pose estimate is so accurate, say my robot moves on a rail and I know perfectly where it is on the rail by some other means, that I can focus on the more challenging task of modeling what the environment looks like. In such cases you can separate the two a little, but in reality you are mostly faced with simultaneous localization and mapping, even if you're only interested in what the environment looks like or where the robot is. Okay, so to illustrate the different properties of these problems: what you see here is a robot at different points in time, t = 1, t = 2, t = 3. The robot moves through the environment on some trajectory; we typically don't consider this a continuous process, but look at discrete points in time, say every second, or every time the robot takes a sensor measurement. Then we have these stars, which illustrate landmarks, things the robot can observe and recognize. If you have a robot which drives around on the green in front of our university, and it has a sensor that detects trees, then every tree is a landmark; if there are, whatever, 20 trees here in front of our building, then you have 20 landmarks, and the robot may estimate the locations of those trees, those landmarks, and it can also estimate its own pose relative to these landmarks. Okay, so these are the landmarks, and the dashed lines illustrate observations of those landmarks: the robot is here, it can see these two landmarks; from there it can see only this one; from here it can see these two. The small dotted line represents the trajectory the robot was taking, and I'm just considering these three points in time. Okay, now let's look at localization. Localization was: we know what the environment looks like, but we don't know where the robot is. That means we know the positions of those landmarks, as if God told us the positions of those
landmarks, so we know those stars are there, and then we need to estimate where the robot was, to estimate those poses. That may look like this: in the beginning, the robot observes the two landmarks; then the robot drives and, as you can see, deviates from its path. So this is the true path, and this, the grayish one, is what the robot estimates. Here the robot thought it turned left, but in reality it turned right; maybe one of its wheels lost air, so the diameter of that wheel got smaller and the robot drifted to one side. That would be one explanation for this behavior of turning to the left. Then the robot makes an observation and says: "okay, if I were really here, there should be a landmark nearby, but there is none; according to the map, the landmark is actually over here." So there's a correction step that drags the robot towards its real pose: the robot says "I measured the landmark to be one meter away, but according to the map and my estimated pose it should be only 50 centimeters away; there's a mismatch between what I measured and what I expect", and this is where the correction down here comes from. Then the robot continues driving, makes another correction, and may end up here at its current pose. That's the localization problem: the robot drives through the environment, integrates its control commands to get an estimate of where it probably is, then observes some landmarks in the environment and can correct its pose. The amount of correction depends on how accurate the sensor is and how accurate the motion execution is: if you have a very accurate motion execution but a very noisy sensor, you will probably trust your motion estimate much more than your sensing data, and the other way around.
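This trade-off between trusting the motion estimate and trusting the sensor can be written down as a scalar Kalman-style correction. The variances below are invented purely for illustration:

```python
def correct(pred_mean, pred_var, meas, meas_var):
    """Fuse a predicted pose with a measurement-derived pose.
    The gain k decides how far the estimate is dragged towards the
    measurement: accurate sensor (small meas_var) -> large k;
    accurate odometry (small pred_var) -> small k."""
    k = pred_var / (pred_var + meas_var)
    mean = pred_mean + k * (meas - pred_mean)
    var = (1.0 - k) * pred_var
    return mean, var

# Odometry says the robot is at 1.0 m; a landmark measurement implies 0.5 m.
# Case 1: noisy odometry, accurate sensor -> trust the sensor
m1, _ = correct(1.0, 0.3**2, 0.5, 0.05**2)
# Case 2: accurate odometry (good wheel encoders), noisy sensor -> trust odometry
m2, _ = correct(1.0, 0.05**2, 0.5, 0.3**2)
print(m1, m2)  # m1 ends up close to 0.5, m2 close to 1.0
```

The same gain formula is what, in the multivariate case, a Kalman filter computes in its correction step.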
For legged robots such as humanoids, on the other hand, it is pretty hard to get a good odometry or pose estimate from the way these robots walk, so there you will probably rely much more on your sensor data. Okay, that was localization. Now we can look into mapping. Again, this is what the world looks like in reality, and this time the robot knows its pose; this is the mapping example, so someone tells the robot where it is at different points in time. The robot takes observations, and these observations are the gray stars here: the robot moves through the environment and creates a map, saying "this feature is here, this feature is here", continues driving, "a feature is here, and this feature is here". As you may observe, this estimate about the world state is not perfect either: the estimated positions of those landmarks are not exactly where the landmarks really are, which is simply a result of the noisy sensor data. In the ideal case, you know what the uncertainty of your sensor is; say, in 95% of the cases I have an error of up to 10 centimeters in my measured distance. In that case you could draw uncertainty ellipses around your estimates, as you would for a Gaussian estimate, and say the true location of the landmark is very likely to be within a certain bound. That's what mapping is about. Okay, now SLAM. Again, this is the setup of what happened in reality. The robot starts here and measures the locations of those two landmarks. It continues driving, again makes this turn and drifts to the left, and measures the other landmark, this one over here. Since it doesn't know where this landmark is in reality, it just adds the landmark to its self-built map, and it cannot do the correction step, or only to some degree, unlike the robot in the localization example. Then it continues driving, so the estimated trajectory of the robot will be this one, although in reality the robot took this movement, and the gray stars are what the estimated map looks like, which is substantially different from the real map. There is always an error in our estimates; in all realistic situations there's an error in the estimate. What you may have seen, or guessed, here is that the accuracy of the map estimate depends on the localization accuracy: this map is less accurate than the map from the example where we knew the pose of the robot. And the pose estimate where I had the correction, where the pose gets dragged down here, is more accurate than the pose estimate in this SLAM example. So whenever I know what the environment looks like, I can better estimate my pose, and if I have a good estimate of my pose, I can build a better map. That's why this is often referred to as a chicken-or-egg problem: a map is needed for localizing the robot, and a good pose estimate is needed to solve the mapping problem. There's a dependency between the model of the environment and how well the robot can localize itself, which in turn influences how accurate my model becomes. This suggests that we have a joint estimation task: we cannot fully decouple localization from mapping; we have to solve both at the same time. That's the main motivation for the SLAM problem. And if we can solve the SLAM problem, we have the most important tools at hand to solve the autonomous navigation problem: having a robot where I can just specify a goal location, and the robot is able to navigate there. To do that, it needs to know what the environment looks like, and it needs to know where it is in the environment in order to reach its goal location. So solving this SLAM problem has a direct impact on the applications that we can build with mobile robots.
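The chicken-and-egg structure can be illustrated with a toy 1-D experiment that alternates between the two subproblems. The hard-coded noise values and the simple weighted-averaging scheme are my own illustrative choices, not an algorithm from the course:

```python
import numpy as np

# Ground truth: the robot drives 1 m per step along a line; one landmark at 10 m.
true_poses = np.arange(8, dtype=float)
landmark = 10.0

# Fixed illustrative "noise" (hard-coded instead of random, for reproducibility)
odom_noise = np.array([0.2, -0.1, 0.25, 0.15, -0.2, 0.3, 0.1])
range_noise = np.array([0.02, -0.01, 0.03, -0.02, 0.01, -0.03, 0.02, -0.01])

odom = 1.0 + odom_noise                       # noisy motion increments u_1..u_7
ranges = landmark - true_poses + range_noise  # noisy range observations

# Dead reckoning: integrate odometry only (localization without any map)
poses = np.concatenate([[0.0], np.cumsum(odom)])

for _ in range(50):
    # Mapping step: estimate the landmark given the current pose estimates
    lm = np.mean(poses + ranges)
    # Localization step: pull each pose towards what the (more accurate)
    # range reading implies; pose 0 stays fixed as the reference frame
    poses[1:] = 0.2 * poses[1:] + 0.8 * (lm - ranges[1:])

err_before = np.abs(np.concatenate([[0.0], np.cumsum(odom)]) - true_poses).max()
err_after = np.abs(poses - true_poses).max()
print(err_before, err_after)  # alternating the two steps shrinks the pose error
```

Because the range sensor here is much less noisy than the odometry, jointly refining map and poses ends up far more accurate than dead reckoning alone; with odometry-only estimates, the landmark estimate would inherit the full drift.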
There are typical applications here. Having a working SLAM system is still something a lot of robots miss, and it's one of the limiting factors today for, let's say, dramatically increasing the number of robots for service tasks or on factory floors. On factory floors, this problem is typically solved by just screwing the robot to the ground: the robot cannot move, so you have no localization problem and no mapping problem. That's how industrial robotics has traditionally solved it. But as soon as you have more flexible production lines, where, say, manipulators are on wheels and can drive freely through the environment, this really becomes an issue. The same holds for other applications: for any kind of logistics or service task where a robot needs to transport goods from A to B, this technology is needed. You can argue that you could build a map once, manually, and give it to the robot, which then only needs localization; that's partially true, but if someone changes something in the environment, if new goods arrive and a path is blocked and the robot needs to replan, how can all this be taken into account? So having a working SLAM system, which solves mapping and localization at the same time, is not just interesting from an academic standpoint; it has a direct impact on real-world applications. Today there are some systems which can solve the SLAM problem quite well. Often this is a compromise between having a good estimation technique and making some assumptions about the environment, or having certain requirements, like a certain type of landmark that is ensured to be always visible. Such constraints may be fulfilled on a factory floor, and if they are fulfilled, the system is, not guaranteed, but very likely to work. So there are compromises, but SLAM is still an open and interesting problem. There are different applications you can envision. For example, in your home environment, think of an autonomous vacuuming robot. You may have a Roomba at home, which is one of the cheapest robots you can get for cleaning your floor, and this robot has an interesting strategy: it says "I ignore this SLAM problem, I just do a random walk, I drive randomly through the environment, and as time goes to infinity I will have cleaned the room." The advantage is that if your room is small, this time doesn't really need to approach infinity to clean your room, but it is a very uninformed, very trivial approach. Whenever you want systematic cleaning, or guarantees you can provide, you need to go beyond that. It gets a bit more tricky for lawn mowers. Autonomous lawn mowing is also something you can buy: go to any big Home Depot-like store, and for less than 2,000 euros you will find an autonomous lawn mower. These are a little more advanced in the sense that, although they typically don't explicitly build a map, they typically want to know where their charging station is, because at some point they need to go back. Here, too, a compromise is made: you have to install a wire in your garden, and whenever the robot drives over this wire it knows "okay, I shouldn't go here." That way you can prevent your robot from, say, falling into your pool, and it also helps the robot drive back to its base station, for example to recharge. So there are compromises, but the lawn-mower industry right now is really interested in getting the SLAM problem solved in a really robust manner, because at the moment people need to install this wire below their lawn to allow the robot to operate safely. There's real interest in getting rid of that: just putting the robot there, switching it on, maybe telling it "don't go here, don't go there", and then letting it start working, without an expensive or time-consuming installation. So that's another application. Other applications include unmanned aerial vehicles: surveillance tasks, monitoring tasks after disaster missions, or, for example, patrolling and estimating the quality of crops. Crop sensing is a big agricultural application for autonomous airplanes or helicopters. Different kinds of surveillance tasks require knowing where the robot is. The advantage for flying vehicles, at least at larger scale, is that GPS is a pretty good pose estimation technique, so a lot of the problems simplify if you have those means at hand; but you may have applications where you don't want to rely on GPS, or where GPS is simply not available. Then there is underwater robotics, for example reef monitoring tasks. This is an intensively investigated field in Australia: the Australian Centre for Field Robotics in Sydney has a lot of boats and underwater robots that inspect reefs, map the sea floor, and, for example, estimate how the reef structure or the ground structure changes over the years. So they want to be able to monitor the ground, build accurate models of it, and relate missions over different years, hopefully observing the same part of the environment. We also have underground applications, mine applications, exploring mines, and applications of archaeological interest where we want accurate 3D reconstructions of small underground channels, passages, catacombs, whatever it is. And you also have space applications, where you want to map an environment and do exploration missions in space. So these are typical examples. What you see here is the Evolution Robotics Mint, now bought by iRobot, which is a Swiffer-style robot: it has a Swiffer-style cloth at the bottom, drives through the environment, and systematically cleans your floor. This is one of the lawn mowers, and an underwater robot of the Australian Centre for Field Robotics monitoring the ground; you have space exploration missions, and this is the exploration of an abandoned mine close to Pittsburgh, done by Sebastian Thrun's group. So these are typical applications that you find. I also brought a small video, one of the showcase videos of the Mint system: [video:] "The secret is the revolutionary NorthStar navigation system. Simply place the NorthStar beacon in the area you want to clean, and it works like indoor GPS, allowing Mint to track where it cleans. Just set Mint down on the floor and press 'run'. First, Mint covers the open floor area, digitally mapping the room as it cleans. Then Mint spot-cleans around furniture legs and other obstacles, and finally Mint meticulously cleans edges and corners using its advanced PerfectEdge technology. It doesn't rest until your floors are spotless. And Mint isn't just for dry cleaning; it mops too." Okay, so as you could see, these guys solve the SLAM problem in a different way: there's a small box that you put on your table or somewhere in the environment, and it projects an infrared pattern onto the ceiling. This pattern turns out to be the map, or the means for localizing the robot: the robot has upward-facing infrared cameras, observes the pattern, and therefore knows where it is; it estimates its pose relative to that pattern, and once you have your pose estimate, mapping is easy. You saw in the video how it drives around the legs of the chairs and things like that, and so
that's one way they solved the pose estimation problem, actually quite an elegant way, because there's no expensive setup: you just put the box somewhere, switch it on, and there's an invisible pattern the robot perceives. I'm not sure how well it works in those really old buildings with really high ceilings, that's questionable, and if you have a robot which goes through different rooms, you need multiple of those base stations, and then it gets a bit more complicated, a bit suboptimal compared to a system which could solve the SLAM problem completely on its own. But of course the advantage for those products is price: I'm not sure what the current price is, but a year or two ago it was on the order of 400 to 500 US dollars, around 300 euros, and for devices at that price you need to make compromises at some point. We have another showcase, this one from Freiburg: the EUROPA project, a project we conducted here on urban navigation. We had a robot navigating the environment; in the summer last year it made its way, starting from here at the campus, to downtown Freiburg, navigating autonomously, which also required quite some mapping. [German-language news clip about the robot autonomously covering a route of several kilometers through Freiburg.] Okay, so there are also other applications where this technology is used for building systems that navigate autonomously through the environment; you could imagine this technology at work in autonomous transportation systems, autonomous wheelchairs, and other applications where you do not want to modify the environment. You don't want to put any beacons in downtown Freiburg in order to go there; you want a system which carries its own sensors, moves through the environment, estimates where it can go, knows its pose within a map, and in this way solves the SLAM problem, which allows it to navigate autonomously. Navigation was the key objective here, but in order to solve the navigation problem, we needed to build accurate maps of the environment. Here is another short example, one of our mapping systems mapping our campus. This is the robot, and this is the proximity sensor, which gives the distance to the closest obstacle. The robot is driving around here, and you may start to recognize what you see: this is building 79, this is the Mensa building, this is where the tram goes, we are sitting in this building over here, and so on. You see the robot moving through the environment; sometimes you see the map shaking a little, which is one of the pose corrections being carried out. As the robot moves through the environment, you can see the structure of those buildings, and this information is sufficient, for example, for a robot to localize and navigate on the campus. You may notice that the robot hasn't seen everything: there are white spots down here, and other areas where part of the building structure is missing. The reason is that this is a passive process: someone drives the robot around and the robot records the map, but the SLAM system itself doesn't tell the robot where to go to collect information, so you get missing information like in that area down here. That's a typical result of such SLAM algorithms. Okay, so let's dive a little
bit deeper into this: what do we have, and what do we want? What we have is a sequence of control commands, which tell me what commands were given to the robot; we call these u_1 to u_T. A u_t can be a command like "drive one meter forward", "stop immediately", or "turn twenty degrees to your left". These commands are sent to the robot, for example by a person operating the robot with a joystick, or by an autonomous planning system; they are the controls the robot tries to execute. So whenever you see a u in this course, it means a control. Sometimes we use controls and odometry information interchangeably; the difference is that a control command is something you physically send to the robot, like "drive a meter forward", whereas odometry is what you get when the robot counts the revolutions of its wheels to estimate how far it has gone and reports that back. Odometry is already the first feedback from the system about what it actually did: I may tell the robot to go one meter forward, the robot drives and stops, but it only drove 99 centimeters; odometry may tell me "you drove 99 centimeters", although the command was 1 meter. So odometry is already using sensor information, typically an encoder that counts the wheel revolutions, and we typically use the odometry in place of the command because it's more accurate and because we get it essentially for free from those platforms; in theory, though, it would be sufficient to just take the commands sent to the robot. The second thing which is very important for us are the observations, always called z, or z_1 to z_T. The subscript "1 to T" means at different points in time, so if I write z_{1:T}, it means observation 1, observation 2, observation 3, up to observation T, just different observations. These observations can be laser scans, where a single scan tells me the proximity to the closest obstacles; they can be camera images; whatever your robot has in terms of sensing capabilities. So these are the observations, and this is what we assume to be given. What do we want to do with that? We want to estimate a map of the environment, what the environment looks like, and the path of the robot. Note that the path typically has one index more: it runs from x_0 to x_T, whereas the commands go from u_1 to u_T. The reason is that if I have a sequence of, say, three poses, between those three poses I can only execute two commands, one from pose 0 to pose 1 and one from pose 1 to pose 2; therefore I always have one pose more than I have control commands. [Student question: couldn't there also be an observation z_0?] You could have that; the reason we typically don't is that x_0 is used to define the origin of the reference coordinate frame, and then you start the estimation from there. Technically you could also take an observation at the very first point in time, you're absolutely right; quite often x_0 is just used to fix the coordinate frame, and then the estimation process starts with x_1. It's more a technical detail, a convention typically used in the community, but in theory you could do it. Any further questions at this point? Okay. One important thing to note is that these quantities are not free of errors. They contain errors: this can be observation noise, or it can be completely wrong data associations. Because this data is not free of errors, we typically use probabilistic approaches to address the SLAM problem, and the same holds for mapping and for localization: most of these techniques, at least the robust and successful ones, use probabilistic techniques.
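Purely to illustrate the bookkeeping, the indexing convention (one pose more than controls and observations) can be captured in a small container class; the class name and fields here are invented for this sketch:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SlamDataset:
    """Bookkeeping for the quantities named in the lecture: poses x_0..x_T
    (x_0 fixes the reference frame), controls/odometry u_1..u_T, and
    observations z_1..z_T."""
    poses: List[float] = field(default_factory=lambda: [0.0])  # x_0 = origin
    controls: List[float] = field(default_factory=list)        # u_1..u_T
    observations: List[list] = field(default_factory=list)     # z_1..z_T

    def step(self, u, z):
        # u_t moves the robot from x_{t-1} to x_t, where z_t is then taken
        self.controls.append(u)
        self.observations.append(z)
        self.poses.append(self.poses[-1] + u)  # dead reckoning only

d = SlamDataset()
d.step(u=1.0, z=[9.1])   # "drove about 1 m, landmark roughly 9.1 m ahead"
d.step(u=0.99, z=[8.0])
print(len(d.poses), len(d.controls))  # 3 poses but 2 controls: T+1 vs T
```

The dead-reckoned poses stored here are exactly the noisy initial guess that a SLAM method would then refine using the observations.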
To illustrate what using a probabilistic technique means, you can describe it as the difference between saying "the robot is exactly here" and "the robot is somewhere around here". It is simply wrong to assume you know perfectly where the robot is. Even if I say "the robot is here, I start the robot, and everything I do is relative to the reference frame where the robot is right now", so the robot starts at x = 0, y = 0, orientation 0, if I then tell the robot to drive a meter forward, even with an extremely accurate motion execution system the robot may not be exactly at x = 1 meter; it may be at 99.99999 centimeters. What I want to illustrate is that there's no way for us to guarantee that the robot exactly executed what we told it. We may argue that if the error is extremely small, we can neglect the uncertainty, which is true: if this sketch of a Gaussian distribution gets extremely peaked as the uncertainty approaches zero, I may be completely happy ignoring probabilities and just assuming I perfectly know where I am. But in most real-world situations this is not the case. We typically have an error which is not negligible, and typically the error even accumulates over time: if I then go another meter forward, and another, and another, the error accumulates and gets larger and larger. Therefore, at least at some point, we need probabilistic approaches to appropriately model this uncertainty, and maybe to take this uncertainty into account.
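How quickly dead-reckoning error grows can be shown with a tiny Monte Carlo simulation; the 1 cm per-step noise level is an invented example value:

```python
import numpy as np

rng = np.random.default_rng(42)

steps, runs = 100, 5000
# Each step the robot is commanded 1 m forward and executes it with
# a (hypothetical) standard deviation of 1 cm.
increments = rng.normal(loc=1.0, scale=0.01, size=(runs, steps))
positions = np.cumsum(increments, axis=1)  # dead-reckoned position per run

# The spread of the dead-reckoned position grows like sqrt(t):
spread_10 = positions[:, 9].std()
spread_100 = positions[:, 99].std()
print(spread_10, spread_100)  # roughly 1 cm * sqrt(10) vs 1 cm * sqrt(100)
```

With independent per-step errors the standard deviation grows with the square root of the number of steps, so no matter how small the per-step noise, the uncertainty eventually becomes too large to ignore.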
but Etta quarters very long I mean may not pass through the door at the end of the corridor will smash into the wall and if you say but I'm still super super accurate I'll just say I just make the corridor ten times longer 100 times longer 1000 times longer some point in time you will he will actually collapse and the same with these probabilistic techniques and uncertainty it may be the case that your sensor or your motion execution system is so accurate that you can ignore that ignore the uncertainty but if you scale up your problem at some point in time you typically run into problems again and that's a reason why we use these probabilistic approaches for most of the techniques we do a here in robotics are not only for mapping and localization but for most of the real world problems in robotics that we do we use these probabilistic techniques because they're just very very great tools to reason about uncertainty they come at a price because modeling that is to believe very challenging and computationally very expensive and the monotony often has to make simplifying assumptions like the error is distributed according to a Gaussian distribution or something like that but nevertheless it's better than saying there's no error i perfectly know where what the world looks like ok so what does it mean in the probabilistic world we said we want to solve the slam problem we want to estimate the path of the robot and what the environment looks like so we can specify this in that way so we have this P off which is probability distribution then we have this X 0 to T which is the path of the robot through the environment its trajectory it's going to fit the the pose of the robot at discrete points in time and then we also have the map that we want to know about then we have this strange vertical bar over here which means given so we want to estimate the stuff which is here written in the front given that we know what's written here behind that that that vertical line what we 
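The accumulating dead-reckoning error described above can be illustrated with a small simulation. This is only a sketch with made-up noise magnitudes, not a calibrated motion model: each one-meter step gets a tiny Gaussian perturbation on heading and distance, and we measure how far the robot ends up from the ideal straight-line endpoint.

```python
import math
import random

def dead_reckon(steps, trans_noise=0.001, rot_noise=0.001, seed=0):
    """Drive `steps` one-meter segments straight ahead with small Gaussian
    noise on each heading change and translation; return the distance
    between the final position and the noise-free endpoint (steps, 0)."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    for _ in range(steps):
        theta += rng.gauss(0.0, rot_noise)     # tiny heading drift per step
        d = 1.0 + rng.gauss(0.0, trans_noise)  # commanded 1 m, executed ~1 m
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return math.hypot(x - steps, y)

# Per-step errors are tiny, but they accumulate: the longer the corridor,
# the larger the expected deviation from the straight-line endpoint.
short_err = sum(dead_reckon(10, seed=s) for s in range(50)) / 50
long_err = sum(dead_reckon(1000, seed=s) for s in range(50)) / 50
print(short_err, long_err)
```

Averaged over several random seeds, the 1000-step run deviates far more than the 10-step run, which is exactly the corridor argument: no fixed accuracy survives arbitrary scaling of the problem.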
Behind the bar we have our observations and our control commands or odometry information. The whole course, condensed into one slide, is essentially about how to estimate that probability distribution: we will use different techniques in order to obtain a hopefully good estimate of where the robot is and what the world looks like. The choice of technique depends on the assumptions we can make. If we can assume, say, distinct landmarks in the environment and certain sensing properties, such as a sensor with Gaussian error, then certain estimation techniques are well suited to the problem. If we have no landmarks and want to build a dense model of the environment, really representing all the physical surfaces of the scene, we may use something else. And if the sensing properties are really weird, not at all Gaussian but some multimodal distribution, then yet other estimation techniques should be used. All these estimation techniques make assumptions; some are more restrictive in their assumptions but, in exchange, easier to implement or more efficient to execute. It is an advantage if I can guarantee that the assumptions hold for the technique I am using; if I relax some assumptions, things may get computationally more expensive. That is the choice of the designer. We will look into three different paradigms: the Kalman filter and its family, which is made for Gaussian distributions; particle-filter-based techniques, which can represent multimodal distributions; and graph-based approaches, which often assume Gaussians as well but can relax this to a certain degree and better tolerate outliers. Graph-based methods are sometimes computationally more demanding, but they are more flexible in how they can be used and can combine different sensing modalities. We will investigate these three paradigms in this course.

So we said we want to estimate this distribution; this is what we also call the full SLAM problem, and drawn as a graphical model it looks like this. (Who of you has seen graphical models? A bit more than half, okay, so let me briefly explain.) You see circles with variable names inside, and those variables we have seen before: the observations z, the controls u, the poses x of the robot, and the map m of the environment. The arrows represent dependencies between those variables, and the easiest interpretation is to read an arrow as "influences". For example, if I am interested in estimating the current pose x_t of the robot, I look at the variable x_{t-1}, the pose at the previous point in time; there is an arrow from x_{t-1} to x_t, which I read as "x_{t-1} influences x_t". That makes perfect sense: if I know where I am right now, I have a pretty good estimate of where I will be at the next point in time, given that I cannot travel at light speed to somewhere else. So read those arrows as influences. Here, for example, the previous pose and the executed odometry command both influence the new pose: if I know where I was and which command I executed, I have a pretty good means of estimating where I am. If I started here and said "go one meter forward", then with quite high likelihood I am somewhere around this position; not exactly there, but somewhere nearby. This is expressed by the arrow from x_{t-1} to x_t and the arrow from u_t to x_t. If you look at the observations, what we are going to
observe depends on two things. Take the whiteboard here as a landmark: what I measure depends on where the whiteboard is in space and on where I am. If I stand here and observe the whiteboard, I might say it is one meter to my right; if I stand over there, it is two meters to my right. So the observation depends first on where the whiteboard is in the environment and second on where the sensor is, and given these two pieces of information I can predict reasonably well what I am going to measure. (Question: does it depend on the map or on the environment?) Good question. Strictly, the important thing is what the world looks like; but given that I build a map, that map should tell me what the world looks like, so my estimated map also tells me something about what I am going to observe. In general you are right: what I am ultimately interested in is the environment itself. The variables drawn with the dark gray background are the ones we want to estimate, our unknowns; the other, light variables are our knowns, the observed quantities. We will not use graphical models very often in this course, but they are a nice way of illustrating dependencies, and also the independence assumptions we make: to estimate a variable such as x_t, we should take into account all variables with arrows pointing to it, because it is influenced by them; x_t is influenced by x_{t-1}, so if I know x_{t-1}, I get a better estimate of x_t. And if I drop an arrow somewhere, I can quite nicely illustrate a modeling assumption. It is just a model I create: I could assume that the current pose of the robot does not depend on the previous pose; if I make that assumption, my estimate quite likely gets worse, but I can do it, and graphical models make such independence assumptions explicit.

This is what we call full SLAM: estimating the full trajectory of the robot together with what the environment looks like. Sometimes one is not interested in the full path but in what we call online SLAM: I only want to estimate the current pose of the robot and the map built up to the current point in time. That is why a small t appears here; it is not necessarily the whole sequence, and t lies between 0 and the final time T. I want to estimate where the robot is right now and what the environment looks like right now, given all my previous sensor data. That is actually what most robots use if they want to make decisions based on where they are and where they want to go: I am not interested in where I have been in the past; given a map of the environment, I just want to know where I am right now. So for most real-world applications, online SLAM is the interesting problem; I do not want to collect all the sensor information beforehand, then build a map and estimate where the robot was, I want to do it online, right now. Written as a graphical model, it has exactly the same structure, but only the last pose and the map are shaded as quantities to estimate; the previous poses are no longer interesting. From a mathematical point of view, this means we need to integrate out all the previous poses, since we are only interested in the pose at time t and the map of the environment. This is
actually an integral over all possible poses x_0, x_1, x_2, and so on up to x_{t-1}, of the full SLAM posterior. If I integrate a variable out, I remove it from my estimate: here is the full SLAM posterior over x_0 to x_t, and if I integrate out x_0 to x_{t-1}, I obtain the distribution over just the current pose x_t and the map. This holds for any variables: if I have a joint probability distribution p(a, b) over two quantities, then, written informally, p(a) is the integral over all possible values of b of p(a, b) db. What remains after integrating over all outcomes that b can take is the distribution over a, and that is exactly what happens here; there is no magic in it. In practice these integrals are solved recursively, one at a time: going from x_0 to x_1, I perform one estimation step and get rid of x_0; at the next point in time I move from x_1 to x_2 and integrate out x_1; and so on, in a successive, iterative process while the robot moves through the environment, so that at every time step one of these integrals is solved. That is the graphical model for online SLAM and the corresponding distribution below it. The different techniques we will see in this course address either the online SLAM problem or the full SLAM problem; which one you solve depends on your problem and on the technique you apply.
We do not know the path and we do not know what the environment looks like; we want to estimate both at the same time, and those estimates are correlated, they depend on each other. Here is an example. A robot drives through the environment, measures these two landmarks, and moves on. The circles represent the uncertainty it has about the positions of those landmarks; the dots are the true locations. So it estimates this landmark to be here and that one over there, and then the robot moves on. While moving it ends up here, but it does not really know where it is, so it has a mean estimate and an uncertainty, shown as an ellipse. Notice that initially the uncertainty was basically zero: when the robot started, it said "I start at (0, 0, 0), I set up my reference frame here, I know my pose perfectly." Then it goes five meters forward and does not really know whether it went 5 m, 4.90 m, 5.10 m, or a little to the right or left, and therefore it has this uncertainty over here. Motion increases the uncertainty of the system. Then the robot measures the two landmarks again, with exactly the same sensor as before, and yet the uncertainty ellipses around this landmark and that landmark are bigger than the earlier ones. The reason is that I need to combine the uncertainty the robot has about its own pose with the uncertainty of the observation; these ellipses are bigger because I did not really know where I was, plus there is the uncertainty of the sensor reading. The robot continues, moves forward, and now one important thing happens: it re-observes the first landmark. By re-observing a landmark, it gets a better estimate of the map, because it now has an expectation of what it should measure given the model, can compare that with what it actually measures, and can correct the map a little. This is shown by the small black circles: this is the uncertainty ellipse after the update step; the robot updates its map because it has re-observed something and gains information. And note that it was only observing this landmark over here, yet if it becomes more certain about the location of this landmark, it automatically becomes more certain about where it has been before: if I know the position of this landmark better, that also reduces the uncertainty of the earlier pose estimates, because I made an observation from there to that landmark beforehand. So the map estimate, the positions of the landmarks in this case, and the estimates of the poses the robot has been in are correlated: if I gain information about one part, this impacts all the others. Given this dependency, I cannot estimate them separately; I cannot first estimate only the poses and then solve the mapping problem. That typically does not work, because it neglects the dependencies.

The second reason why SLAM is a difficult problem is data association. When the sensor observes something, I need to relate what I see now to what I have seen before. Going back to the example, exactly in this situation the robot makes a new observation and must decide: is what I see here the same landmark I have seen before? This situation may be easy because there are not many landmarks, but assume there are ten landmarks really close together: then it is quite likely that you flip the association, depending on how distinct the landmarks are. If they are all identical-looking poles or trees, data association errors are quite likely, whereas with a very distinctive feature description they may be rarer.
It may not happen that often, but in practice it does happen from time to time that a robot confuses two places or two landmarks. The other point is that the uncertainty of the pose estimate itself impacts data association. Say this is my current pose uncertainty and I get an observation of these two features, the green dots: I could be sitting here on the left or sitting there on the right, and this situation is deliberately constructed so that both hypotheses have exactly the same likelihood. So motion uncertainty, that is, uncertainty in my current pose estimate, makes the association decision harder. Worse, picking the wrong data association leads to a wrong estimate: if I decide this association is correct, I conclude I cannot be over there, so the uncertainty shrinks and the mean shifts towards this position; if I instead decide the other association is correct, the pose estimate shifts to the right-hand side. In the one case this would be the robot's uncertainty, in the other case that would be. Either way the uncertainty gets smaller, so committing to the wrong association makes the robot overly confident about a pose that is not true: I get an inconsistent estimate, which is exactly what I do not want. These are the two reasons why SLAM is still a non-trivial problem: the dependencies between the variables and the data association errors. Are there any questions at this point?

Okay, then let us go quickly through a typical taxonomy of SLAM algorithms: how can one describe different SLAM algorithms, and what makes them
distinct? The first criterion is the kind of map they estimate. One class of approaches generates volumetric representations. For example, what you see here is buildings 52 and 51 on our campus, built with a robot standing here with a pan-tilt 3D laser scanner: you really get the physical structure of all the surfaces, of all the points the scanner can observe. Or, something you will see quite often in this course, a typical 2D map where white means free space, black means occupied space, and gray means unknown, the robot has never seen that part. This is like an architectural floor plan, but including all the furniture and everything that lies around in the apartment or building; it is also a volumetric representation, in 2D, with the free space here and the black points being obstacle locations. In contrast, feature-based approaches do not attempt to model all the objects in the environment but only represent distinct landmarks. This example is Victoria Park near the University of Sydney in Australia, one of the famous data sets, recorded a long time ago. There are trees, and a vehicle with a laser scanner drove around estimating circular things, assumed to be tree trunks; every tree trunk created a feature, and the system just estimates the positions of those features. If you know where all the trees are, you can localize the vehicle with this feature-based approach. The advantage of feature-based approaches is that they are a very compact representation: I only need to store the positions of the trees, which is really nice from a robotics perspective. On the other hand, they require a means of saying "this feature I see here is exactly that tree there", which, depending on your features and your sensor, may be hard to do; that is the disadvantage of feature-based approaches. There are always pros and cons on both sides: if you are happy with localizing and you have distinct landmarks, a feature-based map is a nice approach; if you do not know what to expect, or you really want all the obstacles in the environment because you need to navigate and do obstacle avoidance, the volumetric approach may be better. It depends on your application and your estimation technique.

Another distinction people often make is topological versus geometric approaches. In one case you try to estimate the actual geometry. This is a typical map you could buy somewhere; that is what maps looked like before Google Maps, in case you have never seen one: these are the streets, these are the buildings, a typical geometric representation of the environment. What you see here, in contrast, is a subway map, of Paris I believe: it shows only the connections between places, and the positions of the places are not necessarily their true geometric locations. The distance between these two stations may be dramatically different from the distance between those two; the map just describes how you get from A to B using the subway. For solving the subway navigation problem this is much more convenient than a geometric representation, because you do not care where exactly the train goes; you just want to know that if you board, say, train number eight, you get to that station. So depending on the application, topological representations may be better than geometric ones, and vice versa. Topological maps are also very compact: you can describe them as a graph and use standard planning algorithms such as A*, which operate on graphs.
Using topological representations can thus be advantageous. Nevertheless, most approaches use geometric models; topological approaches were pretty popular about ten years ago, and some people are trying to re-establish them since they do have some nice properties, but for most robotic applications geometric models are the models people choose.

Another distinction is known versus unknown correspondences: does the SLAM algorithm assume perfect data association? Recall the earlier example: when I get this observation, do I know which two landmarks it corresponds to, or do I have no idea? Most approaches actually assume perfect data association: when we see a feature, we know which feature it is. That is unrealistic, but tracking all possible data associations is pretty hard, since the number of possible associations explodes quite quickly. There are, however, approaches that say: I assume a certain number of outliers or wrong data associations in my incoming data, and I try to deal with them. So this is something that differentiates algorithms.

Another thing you often find is static versus dynamic environments. Is the environment unchanging? If our robot drives over campus with no one around, then, well, the plants are growing over time, but that is a very small change we can ignore. If you are driving with people around you, however, and the scene looks like this, the environment is highly dynamic: things change, and it can be genuinely hard to recognize that you are at the same place because there are so many dynamic obstacles around. There are approaches that explicitly drop the static-world assumption and really allow modeling dynamics and changing worlds; that is a much more challenging problem than making the static-world assumption, which most approaches do.

One can also differentiate approaches by their ability to represent uncertainty. Some approaches, at least the way they are used, assume small uncertainty in the pose estimate and cannot handle dramatic pose errors: "the robot is quite likely somewhere over here", as opposed to maintaining a number of different hypotheses: I could be here, or over there, or there. The way the uncertainty is represented also matters: with a Gaussian distribution I can simply increase my variance and obtain a larger uncertainty without making the representation itself more complex, whereas with sample-based representations I may need many more samples to represent a large uncertainty well.

What also distinguishes approaches is active versus passive SLAM. We will mainly look at passive SLAM here: we assume the robot is controlled by someone else, by some person or some algorithm; we do not care, we just take the incoming data stream and work on it. That is standard SLAM, which is passive. There are other approaches towards active SLAM, where the robot makes its own decisions about where to go in order to build a good map of the environment; that also involves exploration, deciding where to go to get better maps. We will mainly look at passive SLAM, so you can assume that someone drives the robot around with a joystick.

The next distinction you sometimes see is any-time or any-space SLAM algorithms. Any-time means I have only a certain amount of time available, say 20 milliseconds per observation, to spend on my estimation system; I have no more resources, so do the best you can within those 20 milliseconds. Or I say,
okay, the robot has only one kilobyte of RAM that it can devote to its map estimate, which is not sufficient to represent the whole environment: what should the robot forget, what should it focus on, and how do things change if I increase the memory? These are any-space algorithms. Whatever the constraints, more memory is better and more time is better, but the question is what can be done with a given amount of computational resources or memory; that is also something you often find. Finally, there are single-robot versus multi-robot systems. With multiple robots, data associations must also be made across the platforms, which can be more tricky; but with the more modern graph-based approaches this problem is actually not that dramatic anymore, since multi-robot SLAM can be integrated nicely into the same framework and does not require the extra handling it needed before. So you do not find the single-robot versus multi-robot distinction as often anymore as you did, say, five to ten years ago.

So we have a large number of different properties of SLAM systems, or requirements we need to address, and a large number of different algorithms for solving the same problem. They all have certain advantages in certain settings: under these assumptions this is the optimal approach, under those assumptions that is the best approximation computable in a given amount of time. There is a large number of approaches out there, and a large number of new solutions are still being proposed these days: if you go to a standard robotics conference, you find multiple tracks devoted to SLAM or SLAM-related problems. There is also big interest from commercial users these days, who try to build robots or robotic devices
using robotics technology to model the environment and estimate where the device actually is. Most of these approaches use probabilistic techniques; the very few that ignore uncertainty are typically very efficient but not great in performance, so these days there is no way around probabilistic techniques in the context of SLAM.

The history of SLAM, at least in robotics, goes back to roughly the mid-80s, when people started to investigate the problem. It should be noted, though, that there are other fields where similar problems were addressed long ago. Look at geodesy: people measuring the earth, surveying the size of countries with landmarks, faced very similar problems, around the end of the 19th century and even earlier. Very interesting problems arose there, and people used methods that are still closely related to what we do today. We may have different requirements with robots, and they had some problems of their own: computation time was a pressing issue 200-300 years ago, when everything had to be done by hand, so calculation errors were a concern that has become largely ignorable today, and the focus was somewhat different, but they addressed very similar problems.

Coming back to the history of SLAM in robotics: it started in 1985-86, when Smith and Cheeseman as well as Durrant-Whyte began to describe the uncertainty of geometric relations between observations and between landmarks, presented at ICRA, the IEEE International Conference on Robotics and Automation, the biggest robotics conference. This was followed by one of the seminal key papers by Smith, Self, and Cheeseman. In the beginning of the 90s, Kalman-filter-based approaches became popular; the Kalman filter is an estimation technique developed around 1960 that is frequently used in robotics. The acronym SLAM itself was coined at a workshop at ISRR'95; you sometimes also find CML, for concurrent mapping and localization, but SLAM simply sounds better, so people stuck with SLAM. Then came the very first convergence proofs in the late 90s, showing that under certain conditions and assumptions the system actually converges to the right solution, and the first real systems operating with those techniques. From 2000 on, wide interest started: new approaches were published going beyond Kalman filters, using particle-filter-based techniques, and graph-based approaches became more popular as well.

In this course we will look into the three main paradigms: Kalman filter approaches, particle-filter-based approaches, and graph-based approaches. To do that, let us look again at our graphical model. All those approaches use two concepts: the so-called motion model and the so-called observation model. The motion model tells me how the robot moves, and the observation model tells me how to interpret my observations. The motion model is this part of the graphical model: how do I estimate x_t given that I know x_{t-1} and u_t? This estimation step, obtaining this variable from these two, is what we call the motion model. The observation model gives the likelihood of an observation given that I know where the robot is and what the world looks like. So the motion model is the distribution over the new pose of the robot given the old pose and the control or odometry command: given I know I was here, and given I say "go one meter forward", what is the probability distribution over where I end up? If you have a Gaussian motion model, this may be the initial distribution, and if you move forward, say go
one meter forward, you may end up with this distribution. If you have a non-Gaussian model, meaning some other kind of distribution, then what starts here ends up here as a so-called banana-shaped distribution. The banana shape results from the orientation of the robot and the nonlinear dependency between the state and the angular component; with a sample-based representation you also obtain this banana-shaped form. We have a standard odometry model: the robot is at a pose (x, y, theta) and should end up at (x', y', theta'). How can it get there? Every motion in the plane can be expressed by one rotation, a translation, and a second rotation; with these three components I can easily describe the transformation between two poses: if this is the current pose and this is the heading of the robot, first rotate here, then move straight, then rotate again. These are the equations for doing that: the translation is simply the Euclidean distance between the two positions, the first rotation comes from the arctangent of the position difference minus the initial heading, and the second rotation is the remaining difference of the angular components. We looked into this in more detail in the Robotics I course; there are recordings from this year, chapter six, which you can look up again to refresh it, or you can read Chapter 5 of the Probabilistic Robotics book, which covers the motion model. If you are unfamiliar with this, I recommend taking the book and reading that chapter; it is not that difficult to understand. I will also talk a little about it next week as a reminder, but it would be good to have read it.
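The rotation-translation-rotation decomposition just described can be sketched in a few lines. This is a minimal, noise-free version of the standard odometry model; it ignores the degenerate case of a pure rotation (zero translation), where the first rotation is undefined and is conventionally set to zero.

```python
import math

def inverse_odometry(x, y, theta, x2, y2, theta2):
    """Decompose the motion between pose (x, y, theta) and pose
    (x2, y2, theta2) into first rotation, translation, second rotation."""
    trans = math.hypot(x2 - x, y2 - y)            # Euclidean distance
    rot1 = math.atan2(y2 - y, x2 - x) - theta     # turn towards the goal
    rot2 = theta2 - theta - rot1                  # remaining heading change
    return rot1, trans, rot2

def apply_odometry(x, y, theta, rot1, trans, rot2):
    """Forward model: apply the three motion components to a pose."""
    x2 = x + trans * math.cos(theta + rot1)
    y2 = y + trans * math.sin(theta + rot1)
    theta2 = theta + rot1 + rot2
    return x2, y2, theta2

# Round trip: decomposing a motion and re-applying it recovers the pose.
rot1, trans, rot2 = inverse_odometry(0.0, 0.0, 0.0, 1.0, 1.0, math.pi / 2)
print(apply_odometry(0.0, 0.0, 0.0, rot1, trans, rot2))
```

In the probabilistic version of this model, each of the three components additionally gets a noise term, which is what produces the banana-shaped distribution mentioned above.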
know the pose, or the state, and also given the map, which is maybe neglected here. It tells you what I'm going to expect to measure given I know what the world looks like. In the Gaussian world: let's say I'm here and I know that the landmark is here; this is my probability distribution over what I'm going to measure. If I'm standing here facing the wall and I measure the distance to the wall, I measure something like, whatever, 5 meters plus or minus a little bit. If we have non-Gaussian models, we may get other, rather weird-looking distributions; that's the case for laser range scanners when modeling certain aspects such as dynamics of the scene. Again, chapter number seven in the robotics course, or chapter number six in the Probabilistic Robotics book: skim over it, refresh your mind. I will talk about that next week as well, but to prepare a little bit for next week, that's something I would recommend. Okay, so time to sum up. Mapping is the task of modeling the environment; localization is the task of estimating the pose of the robot in the environment; and in SLAM we want to do both things at the same point in time, simultaneous localization and mapping, which is what the name actually tells us. We've seen full SLAM as opposed to online SLAM, so you should have understood today what the structural differences between both are: once we estimate the whole trajectory, once only the last or current pose. I also roughly introduced a taxonomy of SLAM problems, the typical properties you use to distinguish them. I try to have a last slide for every lecture here giving you literature where you can actually read up on these topics, or read them again, and where you find more details. For the overview, see the Handbook of Robotics, the chapter on simultaneous localization and mapping, subsections 1 and 2; the things we covered here today you can read there. And on motion and
observation models I just had one slide each, and all that was very short: it's chapters 5 and 6 in the Probabilistic Robotics book and chapters 6 and 7 in the robotics course, where there are also video recordings from last year. So if you want to know more about that, read through it, although I will talk about these models in the context of the Bayes filter next week as well. That's it from my side; are there any questions from your side?
|
SLAM_Course_2013
|
SLAM_Course_14_Least_Squares_Cyrill_Stachniss.txt
|
Okay, so welcome back to the course on robot mapping. So far we looked into Kalman-filter-based approaches in the beginning of the course and then also looked into particle-filter-based approaches, and today we'll start with the graph-based paradigm. This is one of the most popular frameworks that we find today in the context of the simultaneous localization and mapping problem. It has actually been known for quite a while, but it only became popular over the last five, six years. The ideas were there beforehand, but the mathematical tools that were available were often not efficient enough to use for real-world problems. This has changed, and today there are pretty good systems available that we can use in order to address this problem. What I'm looking into today is the least squares technique; this is the basic methodology, squared error minimization. The goal is to find configurations that minimize some error. So you already know the Kalman filter that we did and the particle filter that we did, and these graph-based approaches will be introduced today. This is the last big building block of estimation techniques we look into here in the context of the SLAM problem; there are variants of it, but it's the last big building block for this course. So, graph-based SLAM: in most cases, the way you describe the SLAM problem, it is least squares SLAM. We try to minimize an error, namely the error between what we expect to measure and what we measure, very similar to what we did with the Kalman filter, where we try to recursively estimate the state of our system given the observations and the controls. We try to find those states which minimize the error, or which best explain the observations in a probabilistic sense, because I will actually show later on that this is strongly related to minimizing the error, the mismatch, between what we measure and what is there. So today we will start with
a very general introduction to least squares, because this is a very, very common technique. It has been available for a couple of centuries and is used in basically every, let's not say every discipline, but every engineering or scientific discipline: in biology, in linguistics, in any engineering field, at some point in time people are faced with error minimization in one of these standard forms. The idea is to compute a solution for a problem that is overdetermined; that means we have more equations than unknowns. We have some information that we want to use to solve an equation or a system of equations, and typically we have more equations than unknowns, more information than free variables. So what problem do we typically have in real-world situations when we are faced with this? What happens if you consider, thinking a few years back, a system of linear equations, you try to determine the values of the unknowns in this system, and your system has more equations than unknowns? Exactly: typically, whenever your measurements are not perfect, you will end up in situations where two equations simply disagree. One says the value of x is 1, the other says it's 1.1. You may argue this is caused by measurement noise, by inconsistency, or by chance if you want. The approach we are addressing here tries to find the solution that minimizes the squared error over the sum of all equations: it checks which value best explains all my measurements, and simply accounts for the fact that I can't make all of them fit; there's noise, since we are not living in a perfect world. The goal is to minimize this error. This is a standard approach which holds for a large set of problems. Every regression problem you can treat in this way: you have a number of data points, you fit, say, a straight line through these data points, and what you do is determine the parameters
a and b of the line y = a·x + b, so that the squared error, the difference between the data points and the line, is minimized. This method was developed in 1795 by Gauss, who was actually only 18 years old at the time, and he didn't even publish this work; he just developed it and told people about it, but didn't publish it at first. Six years later he used this technique to actually predict the future position of a newly discovered asteroid; that was in 1801. Gauss, using his own ideas, was the only one who was able to correctly predict where the asteroid would be, because it had passed behind the sun, so it was gone from view, and he was the only one of those people to actually predict where it would reappear. This was kind of the first big showcase. Later other people also published this method, but the credit for inventing it goes to Gauss; there were other publications at the beginning of the 1800s proposing this or a very similar technique, and Legendre, for one, developed and published it independently in those years. Okay, so let's go a little more into the details, tailored a little bit to the problems we are addressing here: we have observations and we want to estimate the state of the system. The observations we get, the data, should match what our models predict. So we are given a system of observation functions, call them f_i, that map the state x into a predicted observation. This is very, very similar to what we did in the Kalman filter: given the state, what do we expect to observe? What is the predicted observation given that the robot is at a certain position and the landmarks are at their positions? This is the state vector x, and from it I can compute what I expect to observe if the world looks like what my state vector says. And then I also have a real measurement; this is z_i, the measurement
that I actually obtained, as opposed to what the predicted measurement would be. Under the assumption that the observations we obtain are noisy observations, we want to compute the state x* which best explains them. We have n different observations and potentially n different observation functions: if these are all the same type of observation, this function f_i is the same for all of them, but if I have different sensors with different properties, these functions can be separate for the individual observations. As input they take the parameter x, and this determines the predicted observation, and I have my real observation. The goal is to actually find the configuration x such that the error between the predicted and the real observation is minimized. If we look at this as a picture: we have this variable x, our state, which is unknown, and it influences the predicted measurements; the functions f_1 to f_n use the state and compute the predicted observations. Then we have the real measurements, and we want to find an x so that the difference between those values is as small as possible, and what we look at here is the squared difference. Let's make it more concrete with an example: let x be the positions of 3D features in the environment that are observed with a camera, and let z_i be the coordinates of the 3D features as projected onto the camera image, so the projection from 3D to 2D that tells me where I expect to see these features in my image. Then the goal would be to estimate the 3D positions of those features in space so that the reprojection error is minimized, so that the projections the features generate are as close as possible to the projections I actually measure. Say we just have one single camera image. Can we do that? If we find a solution with this approach, what does it mean? Have we really correctly estimated the 3D
locations of the features in this particular setup, their 3D coordinates, with one single camera observing those features? In the camera image plane you have the projection from 3D space onto 2D space, and following this approach you want to estimate the positions of those features so that the distance between the projected point and the point I actually measure on the image is minimized. Say I can drive all these errors to zero, which is the perfect case. Did I then really correctly estimate the locations of those features? No: the depth information is missing. You don't know if the object, for example, is one meter away or two meters away along the same ray. Look at a ray going out from the projection center through the image point: the feature can be anywhere on that line and will produce exactly the same projection, the same prediction, on the image. Therefore, if you just have one camera, this does not have a unique solution. With two cameras, like a stereo setup, we can actually estimate also the depth, because we have the information from both cameras. This was just a small example of where to employ the technique we are going to present here in order to estimate a configuration that best explains the measurements. Any questions about this more general concept? So the key thing really is: we have these f's, we have our real observations z, based on f and the state we can compute the predicted observations, and we want to choose x so that the squared error between those values is minimized. In order to do that we need to define the error function. What would be the error function? We have the measurement z_i on the one hand and the prediction on the other, and we want the error between them. What's the easiest error function? Exactly, the difference between them. Then we
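The depth ambiguity just described is easy to verify numerically: under a pinhole projection, scaling a 3D point along its viewing ray leaves the pixel unchanged (a minimal sketch with an assumed focal length of 1; names are my own):

```python
def project(point, f=1.0):
    """Pinhole projection of a 3D point (X, Y, Z) onto the image plane."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

p_near = (1.0, 0.5, 2.0)   # a feature 2 m away
p_far  = (2.0, 1.0, 4.0)   # same viewing ray, 4 m away
print(project(p_near) == project(p_far))  # True: the pixel is identical
```

So a zero reprojection error in a single image does not pin down the depth; only a second view constrains it.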
take the squared error. So the error e_i, written in bold, which means it is a vector, is the difference between the obtained measurement and the predicted observation: e_i(x) = z_i − f_i(x). (I use the letter e twice, once bold for the vector and once plain for the scalar squared error; be careful not to confuse them.) We make some assumptions here: first of all, the observations are independent of each other, and the error has zero mean and is normally distributed. We assume the error has the information matrix Omega_i, very similar to before. Then we define the squared error, which takes this uncertainty into account: e_i(x) = e_i(x)^T Omega_i e_i(x). The matrix in the middle is the information matrix. If we set the information matrix to the identity, we get the standard least-squares formulation where we just square the error; otherwise the information matrix scales the error depending on how certain we are. For example, the measurements of a very accurate sensor should account for more in terms of error compared to the measurements of a sensor that is very uncertain. This scalar e_i(x) is the squared error of one single observation, the one with index i. Given all of them, we can define the problem as finding the minimum of the function F(x), which is the sum over these scalar values, and we are interested in the state x*, the one which minimizes the error. We can write this as x* = argmin_x F(x) = argmin_x sum_i e_i(x)^T Omega_i e_i(x). Here we make the assumption that the observations are independent, and then this sum is just a replacement by definition. So now our
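As a minimal sketch (function and variable names are my own, not from the course code), the global error F(x) just defined can be evaluated like this:

```python
import numpy as np

def global_error(x, measurements, predict_fns, infos):
    """F(x) = sum_i e_i(x)^T Omega_i e_i(x), with e_i(x) = z_i - f_i(x)."""
    total = 0.0
    for z, f, omega in zip(measurements, predict_fns, infos):
        e = z - f(x)                # error vector of one observation
        total += e @ omega @ e      # squared error, scaled by the information matrix
    return total

# toy usage: two noisy direct observations of a 2D state
f_id = lambda x: x
zs = [np.array([1.1, 2.0]), np.array([0.9, 2.0])]
F = global_error(np.array([1.0, 2.0]), zs, [f_id, f_id], [np.eye(2)] * 2)
print(round(F, 3))  # 0.02
```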
goal is to find this x*. If I have this x*, I actually have the solution to my problem. If you transfer this to SLAM, x in this case would be the poses of the robot and maybe the locations of the features, for example, and then I want to change my parameters, my configuration x, in a way so that the configuration I have is as well in line with what I observe as possible. Any questions about that? It's very silent; that's either a very good sign, or it's too early in the day for you. Let me say a word about what this information matrix does: it represents the certainty we have about our sensor, and the more certain we are, the bigger the values in there. If you consider the Gaussian, the multivariate normal distribution, we have some prefactor and then exp(−½ (x − μ)^T Omega (x − μ)); this is how the information matrix is used in the Gaussian case, and here it does nothing else, it is just a scaling. It scales the error, saying that about this component of the error term we are more certain than about another one: for this dimension we are more certain, for that dimension less. It may be that the sensor measures some part of the state space very accurately and for some other part doesn't provide any information; then the values in this information matrix will differ, for example along the diagonal. In most cases it is just this scaling. So if we have one sensor where this matrix is the identity and one where it is two times the identity, it just means that the sensor with two times the identity provides the same kind of information but is more accurate, with the same relative weighting of the individual dimensions. The other thing to note is that the formulation is general in that you have an individual term for every measurement, so there can be an individual
information matrix for every sensor measurement, per sensor or depending on the current accuracy of the sensor. There may be situations where the sensor has very high certainty and situations where it is very noisy, and you can model both for the same sensor in the same framework. So what's our goal? Our goal is to find this x*; the function over here should be minimized. The standard approach: derive the function, set the derivative to zero, solve. The problem is that this function may be arbitrary, so it may be quite difficult, especially in the nonlinear case. How many interactions are in there, are there sine and cosine functions, are there multiple minima; all these ugly situations may come up. So in general it is very complex to compute this, and there is no closed-form solution we can find. We use numerical approaches, which iteratively try to get better: I make small steps on the function I am going to minimize, towards the minimum, and try to approach the minimum step by step. One way to address this is to look at the local neighborhood, assume a certain shape of the function there, and then try to move towards the minimum of that shape; we will do something very similar. So in general there is no closed-form solution, and what we need to do is use approximations and an iterative approach to come up with the solution. What we typically do is make some assumptions, and one of the assumptions used here is that a good initial guess is available, so we roughly know the values of these variables. This is especially needed with an error function which has local minima, like this one, for example, where the global minimum would be here: if I just follow the gradient of this function as my first approach and my starting value is somewhere over here, far from the true minimum, I cannot find it with this gradient. So the assumption is that this initial guess is somewhere not too far away
from the true solution. This may seem a strong assumption, but for our problems, for SLAM for example, an initial guess is easy to get: we can use our odometry information to easily compute a rough solution and start with this as the initial guess. In other problems you may have to come up with an initial guess on your own in order to approach the correct solution. It may also be, depending on the shape of the error function, that you don't need any initial guess at all, it just doesn't matter; then you are in a very lucky situation, but in most realistic situations this is not the case. The other thing we assume here is that the error function is kind of smooth in the local neighborhood of the minimum, which is hopefully the global minimum we are close to with our initial guess, and that it has a well-defined gradient of some kind; a friendly function, so to say. If we have this, then we can apply the following approach, which is called iterative local linearization, the Gauss-Newton approach, and which I am going to present now. This is one way of approaching the solution. It works as follows. We have a potentially nonlinear error function. The first thing we do is linearize it, similar to what the extended Kalman filter did: if you encounter a nonlinear function that you cannot handle otherwise, the easiest thing you can do is just linearize that function. So we linearize our error functions, here the vector-valued ones, around the current solution; that is the reason why we need the initial guess, because we need a linearization point, and that's where our initial guess is important. Then we follow the standard approach: we compute the first derivative of the squared error function, and then we
set this first derivative to zero. As a result we obtain a system of linear equations; a system of equations because this is a multi-dimensional problem with a lot of observations. Then we solve this linear system, and from the solution of the linear system I can obtain a new state estimate, a new x, which is hopefully closer to the global minimum. If the assumptions I made before are not justified, if my error function is really weird or my initial guess is far off, I may approach another minimum, or I may even end up far away from the minimum if my error function has a very, very weird shape that is not smooth at all. And then we iterate this procedure. Why do we need to iterate this procedure? Let me ask you; consider, back at school, if you had a quadratic function, you computed the first derivative, set it to zero, and this gave you the location of the optimum in one step. Why doesn't that work here, why do we have to repeat it over and over again? The reason is the linearization: we only use an approximation, and this approximation depends on our current state, on the initial guess. So if I have a nonlinear error function and I linearize it somewhere over here and follow the gradient of the squared error, then, if the function is only locally smooth, the linear expansion I obtain at this point is different from the linearization at another point, and therefore I have to make several steps, changing the linearization in each one. And the initial guess: in the first iteration we use the initial guess that is given to us, and afterwards we always
use the previous estimate, so the result of one iteration is used as the linearization point for the next iteration. Okay, any questions about that? Okay, so let's go through all these individual points and see how we can actually carry them out. The first thing we need is the linearization: we have to linearize our error function, and we do this for the vector-valued form. Whatever argument the error function takes, I split it up into two terms, x plus Delta x, where x is the linearization point and Delta x is a small increment. The approximation only holds if Delta x is small: if Delta x is big, the approximation, the linearization, will differ a lot from the original function; if it is small, it is hopefully very similar. So we say e_i(x + Delta x) is approximately e_i(x) plus the Jacobian J_i, computed at the state x, times Delta x. This is the standard way we use the Jacobian; it is exactly what we did in the EKF for linearizing our observation function and our motion function. Just as a reminder: the Jacobian is the matrix of first derivatives of a multi-dimensional function, defined so that the entry in row m and column n is the partial derivative of the m-th dimension of the function f with respect to the n-th variable x_n. So what we need to do in order to perform this linearization is to be able to compute the Jacobian. With this linearization we can now fix x, our current guess, and carry out the minimization in Delta x. Delta x is the variable we are looking into: it is going to be the change in the state. What should be the change Delta x to our state which minimizes the overall error? x is fixed to the initial guess, and we ask how changing it improves things. So we need to determine the Delta x that minimizes the global error. Right, okay, so how does this work? Let me do some derivations
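To make the linearization step concrete, here is a small sketch (my own example, not from the lecture): a range-measurement error function, its Jacobian approximated by finite differences, and a check that e(x) + J·Delta x matches the true e(x + Delta x) for a small Delta x:

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference approximation of the Jacobian of a vector-valued func at x."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (func(xp) - fx) / eps
    return J

# error of one range measurement to a landmark at the origin (hypothetical example)
z = np.array([2.0])
def e(x):
    return z - np.array([np.hypot(x[0], x[1])])

x = np.array([1.0, 1.0])
dx = np.array([0.01, -0.02])
lin = e(x) + numerical_jacobian(e, x) @ dx   # e(x + dx) ~ e(x) + J dx
print(np.allclose(lin, e(x + dx), atol=1e-3))  # accurate for small dx
```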
variations and slower blackboard but if you've ever so this is not tailored yet forward slant standard this fair approach and explains the usual steps so what we have we have our arrow front-running perspective indicated you high as nectar all cost 8 X 12 e Pineridge wait everything yo special paint - thanks the observed observation minus the predicted observation given the state and then I had this one bringing little trips we had the eye and funky we want to minimize was function they ever S which is given as the sum of the individual okay so the first thing we did was that minimize this country was to bigger eyes so we have written the eye of X plus del thanks it's just with if it makes me on the last slide this was what unfortunate that or Ericsson in the XVI of X Wow petroleum I don't ya of course depends on eggs just call this so there's a portal of this depends on X and X levels on the X otherwise we'll get nuts with all business okay so this one interesting right now let's see where this leads us this restaurant the individual error to the spare our patent system over here so we have to take this term and replace the e is with the impersonator so it's now approximately was au s-- linearization GI of x plus ji x transpose matrix II i J I know that right okay well if you know who and see the split us up into individual between individual terms I of X transpose deformation matrix X Mercer what else do I get let's make it more direct all the individual I'm supposed to be there's a little different cell types let's post this I suppose have to change the order okay May 26 that's okay and always thought it's nomination so small big sickening we do okay so now it's a way to turn this room stays the same the I suppose just taught the ex they gave you this well the first order and now I seek to combine those two throws over here with second and of course the first one they can take say hey wait it's cool xt j omega j x so then happy to other services that we provide 
So the function here is a scalar, which means all the individual terms in this sum are scalars as well. Therefore I can just take one of them, this one, and transpose it without changing its value. If I transpose Delta x^T J_i^T Omega_i e_i, I end up with e_i^T Omega_i J_i Delta x: the information matrix transposed is the same as the information matrix, since it is symmetric, and the other factors get swapped. So the two middle terms are the same, and I can combine them into two times e_i^T Omega_i J_i Delta x. Is this step clear, or do I have to explain again why I combined those two terms? Next, simply to make this shorter, because we are sick of writing all those terms, we simplify our life a little bit with some definitions. The first term, e_i^T Omega_i e_i, I call c_i; this term is a constant in Delta x, it doesn't depend on it. Then I have this term over here, e_i^T Omega_i J_i, the error transposed times the information matrix times the Jacobian, and I call it b_i^T. And the matrix J_i^T Omega_i J_i, the Jacobian transposed times the information matrix times the Jacobian, I call H_i. It is just a definition, nothing more. As a result of this I can write the function as c_i + 2 b_i^T Delta x + Delta x^T H_i Delta x. Since I am interested in this function as a function of Delta x, what kind of function is this? It is a quadratic form: a constant term, a linear term, and a quadratic term; just a multi-dimensional quadratic function, like the parabolas we are accustomed to. Okay, that's good to know, and we will exploit it. The next thing we do is look at the global error F(x + Delta x): it is approximately the sum over the individual terms c_i + 2 b_i^T Delta x + Delta x^T H_i Delta x, an approximation because we used the linearization.
(There was a question about normalization: no, we do not normalize here. Normalizing would mean that if I sum up the terms, the result is always one; we are not in the probabilistic world at the moment, where we sum over all possible outcomes and require the total probability to be one, so there is no normalization.) Okay, so by just rewriting this a little bit, moving the sum into the individual terms, I get the sum over the c_i's, plus two times the sum over the b_i^T's times Delta x, plus Delta x^T times the sum over the H_i's times Delta x; the Delta x's can be moved out of the sums since they do not depend on i. And now I can actually make new definitions: I call the sum of the c_i's c, the sum of the b_i's b, and the sum of the H_i's H, and I come up with the whole form F(x + Delta x) is approximately c + 2 b^T Delta x + Delta x^T H Delta x; again a quadratic form. So we linearized the vector-valued error, squared it, and obtained a quadratic form in Delta x for the approximated squared error, and this approximation is what we want to minimize now. How do we determine the minimum of a quadratic function? We compute the derivative and set it to zero. So we need dF/d(Delta x); what does this give us? The c goes away, the term 2 b^T Delta x gives 2b, and for the quadratic term, the matrix form of the derivative, there is a great resource on the web, the Matrix Cookbook; just look up the rules for matrix derivatives. The result of this derivation is then given by 2b + 2H Delta x. This comes from
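Collecting the blackboard steps, the whole derivation can be written compactly as:

```latex
\begin{align}
e_i(x + \Delta x) &\approx e_i + J_i \Delta x\\
e_i(x+\Delta x)^T \Omega_i\, e_i(x+\Delta x)
  &\approx \underbrace{e_i^T \Omega_i e_i}_{c_i}
   + 2 \underbrace{e_i^T \Omega_i J_i}_{b_i^T}\, \Delta x
   + \Delta x^T \underbrace{J_i^T \Omega_i J_i}_{H_i}\, \Delta x\\
F(x + \Delta x) &\approx c + 2\, b^T \Delta x + \Delta x^T H \Delta x,
  \qquad c = \sum_i c_i,\quad b = \sum_i b_i,\quad H = \sum_i H_i\\
\frac{\partial F}{\partial \Delta x} = 2b + 2H\Delta x \stackrel{!}{=} 0
  &\;\Longrightarrow\; H \Delta x^* = -b
  \;\Longrightarrow\; \Delta x^* = -H^{-1} b
\end{align}
```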
a general rule: if f(x) = x^T A x + b^T x, then df/dx = (A + A^T) x + b. Since our matrix H is symmetric, A + A^T is 2H, so we get the term 2H Delta x, plus the 2b from the linear term we had before. In case you missed writing that down, that's not a problem at all; the derivation is also written down here, you can look it up next week, and the rule is in the Matrix Cookbook. In the end we have this term over here, dF/d(Delta x) is approximately 2b + 2H Delta x, exactly this form. There was a question: is this still an approximation? Yes; it is the exact derivative of the approximated function, but only an approximate derivative of the original one. So the next thing: we derived the function, now we set the derivative to zero. Setting the first derivative to zero gives 0 = 2b + 2H Delta x, and this leads to a linear system: I divide by two and move b to the other side, so H Delta x = −b, and I compute the solution by multiplying with H inverse from the left; H inverse times H cancels out, leaving only Delta x. So the update is Delta x* = −H^{-1} b, and this is the increment I need to add to my original state, my current state, in order to come closer to the minimum. And then I need to iterate this procedure, because I used a linearization: the linearization depends on the linearization point, and the steps I have done assumed that I really have a quadratic
In reality the error function is only approximately quadratic, so I need to re-linearize at every iteration in order to get a better approximation. Okay, so to summarize, this is the Gauss-Newton approach: I linearize around my current guess x, and I compute the error term and its linearization for every measurement. In practice I really have to compute the Jacobians myself — they obviously depend on the type of measurement I have: I need to derive the observation function depending on the properties of my sensor. This gives me the error terms and the Jacobians, and with those I compute the terms of the linear system, b and H — these are just the definitions we had: b collects Jacobian transposed times information matrix times error, and H collects Jacobian transposed times information matrix times Jacobian — and that's why I need the Jacobians, because they appear in exactly these terms. Then I solve the linear system H Δx = −b, and I update my state: the x for the next iteration is the old x plus Δx. I iterate this procedure until Δx is zero or very close to zero, and then hopefully we are at the solution; you should check regularly whether to continue iterating.

Question: how many iterations do you run in practice? In practice you typically fix a certain number of iterations to your own taste, or use a convergence criterion. Question: what determines the calculation time? We need to compute the Jacobian matrices, and that depends on how complicated your function is, but the complexity really grows with the dimensionality of your problem: the bigger the state space gets, the longer each iteration takes. And it obviously also depends on the structure of the error function: if your error function is very benign, close to quadratic, the method converges very quickly.
If it is actually linear it converges directly, in one step; but if it has a very ugly, very nonlinear shape, the quadratic approximation is only good in a small environment around the linearization point. So, to repeat: the complexity per iteration grows with the dimensionality of the state space, and I haven't said anything precise about how many iterations you need — that depends on the shape of your error function. Any further questions? Okay, then let's take a short break and continue afterwards.

Okay, let's continue with a small example of how we can use the approach from before in practice — a showcase where it is actually usable. This is also related to the exercise sheet, so what I explain now should help you there. Assume you have a robot which computes its odometry by counting the revolutions of its wheels, and this system has errors — none of these systems is perfect, otherwise life would be too easy. Some of these errors are systematic, and I can use calibration to estimate parameters and come up with corrected odometry — not perfect odometry information, but at least odometry with no systematic error. That's the goal: we want to pre-correct the odometry to get rid of the systematic errors. And assume we don't have any detailed information about how the robot works — we don't want to model explicitly the sizes of the wheels, which may differ, or the air pressure, and so on. We want the simplest possible solution that allows for calibration. So assume we have some ground-truth odometry — let's call it u_i* — which is available from some external source.
For example, the robot drives around inside an external motion-capture system which estimates its position very accurately; or a scan-matching approach is used; or I already have a SLAM system which works very well, and I use its estimates as my ground-truth odometry. So I run such a system once to record a small data set, and based on this data set I compute the so-called calibration parameters. Then I can use the robot in any other context, and I should get bias-free odometry without systematic errors. So I have one data set with the raw odometry information u_i and the corresponding ground-truth odometry u_i*. What I want to do is define a function which transforms the raw odometry u_i into a corrected odometry, so that the difference between the corrected odometry and the ground truth is as small as possible — that's exactly the idea.

Question: doesn't the odometry error mostly come from the wheels slipping on the ground rather than from the sensor itself? Yes — the encoder itself rarely makes errors: these encoder discs have a special code on them, so even if you miss one black-white transition you can actually recover from it, so it's quite unlikely that counting the revolutions of the wheels goes wrong. It's more likely that the wheels slip, or that the wheel diameter is not what you assume — it changes with the payload of the robot and with the air pressure of the tires — or that the length of the wheel axis changed because someone unscrewed the wheels and remounted them slightly differently. Each of these may be off by just two or three millimeters.
That's not much, but if you accumulate these errors over long periods of time, the difference becomes substantial. And every time you change something on the robot, you should recalibrate — this is tedious: you have tires, they lose air pressure, so you need to recalibrate from time to time. For example, the robot may come well calibrated from the factory, so it's much better than an uncalibrated one, but you need to recalibrate whenever you change the system: if you change the air pressure, if you change the configuration — say you mount an additional sensor and the robot becomes heavier — or if you unscrew the wheels, take something off, and put the wheels back. All these things make the setup slightly different, and then calibration helps. As a general comment: whenever you have a well-calibrated system, your problems get much, much easier, so it's worth investing some time.

Okay, so what we want to have is a function which maps the raw odometry into a corrected, pre-corrected odometry, and this function depends on some parameters. For these parameters, let's say we don't consider any internal information about where the wheels are located, how many wheels the robot has, or the lengths of the axes — we could do better if we modeled the errors explicitly, but we don't assume that information is available, so the approach works for every system. We simply say the odometry is three-dimensional — some rotation, translation, rotation over a time interval, or whatever parameterization we use — and we transform it with a function which is just a matrix with nine entries, a 3-by-3 matrix, which transforms the raw odometry information into corrected odometry information to get rid of the systematic errors.
So I have these nine values over here, which form a matrix. We could find a better function with more background information — no doubt about that — but this keeps things simple. So this is everything the function f does. Okay, so what do we concretely need in order to use this in practice? We need the raw odometry and the ground-truth odometry. And what is the state x? Exactly: the nine entries of the matrix. So my state vector x consists of nine different parameters. Okay, and what does the error function look like? It compares the ground truth with the corrected odometry, so it takes these two values into account: e_i(x) = u_i* − f(x, u_i) = u_i* − X u_i, where X is the 3-by-3 matrix formed from x. Okay, so what does the Jacobian of this function look like — is it trivial? Well, the Jacobian is the first derivative of this function with respect to my state, so in this case d e_i / d x. Let's go through it: the first row of the Jacobian is the derivative of the first dimension of e_i towards all the components of x. The first dimension of e_i is the first value of u_i* minus (x11 times the first value of u_i, plus x12 times the second value of u_i, plus x13 times the third value of u_i). And so I need to write down, component by component, d/dx11 of this expression — u_i,x* minus x11 u_i,x minus x12 u_i,y minus x13 u_i,θ — then d/dx12, and so on.
So how does the first dimension of e_i change with respect to x11? Only the term x11 u_i,x depends on x11, so the derivative is minus u_i,x — everything else vanishes. The same holds for x12 and x13, and all the entries of x that don't appear in the first dimension give zero. This pattern repeats for all three rows. So what are the dimensions of this Jacobian? It's a 3-by-9 matrix: three rows for the three dimensions of the error, nine columns for the nine parameters, with minus u_i^T appearing block-wise in each row and zeros everywhere else. Of course it's mostly zeros — but that's not what I want to point out. What is the important thing about this Jacobian — why is this a very, very nice case for us? Because the Jacobian does not depend on x. That means the whole thing we have is linear — a completely linear problem. If the Jacobian does not depend on the linearization point, it is the same everywhere, which means we have a linear error function, which means the scalar squared error really is a quadratic form — it is not an approximation. So the quadratic form we minimize is exact, and we don't need to iterate, because we directly reach the correct solution: the vector error function is linear in x, so we execute one shot of Gauss-Newton, we have the solution, and we're happy. Are there questions about that?

Question: what do the parameters look like if the odometry is already perfect? Then the matrix is the identity: x11 = 1, x22 = 1, x33 = 1, and all other entries are 0. And how many measurements will we actually need, at minimum, in order to come up with a solution?
At least three — correct, you have a solver implemented in your brain. I have nine unknowns, but every measurement gives me three equations, so at least three odometry measurements are enough: three times three equations for nine unknowns. That's the minimum amount; of course, more measurements are better.

Why is H symmetric — in general, for these kinds of problems? H is defined as a sum over products of three matrices: J^T Ω J. What about the middle matrix? In this example we know it's symmetric, because the information matrix is the inverse of a covariance matrix, and covariance and information matrices are always symmetric. So I have a matrix J transposed, times a symmetric matrix, times the original matrix J again — and a product of the form J^T M J with M symmetric always ends up in a symmetric matrix. And if I add up a couple of symmetric matrices, the result is again symmetric. That's why H is symmetric.

How does the structure of my measurements in general impact the structure of H? It's related to the question of which state variables a measurement actually involves. What happens if my observation function only observes a subset of the variables — say I can observe only one dimension of the state? Then the error for the other dimensions is zero: I can't measure anything there, the error only affects one dimension, and that means the Jacobian of that error term is zero for all the other dimensions. If I don't observe something, it doesn't contribute to the error, so its derivative is zero.
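The whole calibration problem can be sketched in code. This is my own illustration following the lecture's setup (Ω = I, synthetic data, a made-up "true" systematic error): nine unknowns forming a 3-by-3 matrix, error e_i = u_i* − X u_i, a Jacobian that does not depend on x, and therefore an exact one-shot solution.

```python
# Odometry calibration as linear least squares: solve H dx = -b once, done.
import numpy as np

rng = np.random.default_rng(2)
X_true = np.eye(3) + 0.05 * rng.standard_normal((3, 3))  # hidden systematic error

U = rng.standard_normal((10, 3))           # raw odometry readings u_i
U_star = U @ X_true.T                      # "ground truth" odometry u_i*

x = np.eye(3).ravel()                      # initial guess: identity matrix
H = np.zeros((9, 9))
b = np.zeros(9)
for u, u_star in zip(U, U_star):
    e = u_star - x.reshape(3, 3) @ u       # error at the linearization point
    J = np.zeros((3, 9))                   # 3x9, block-wise -u^T per row
    for r in range(3):
        J[r, 3 * r: 3 * r + 3] = -u
    H += J.T @ J                           # Omega = I for simplicity
    b += J.T @ e
dx = np.linalg.solve(H, -b)
X_est = (x + dx).reshape(3, 3)             # exact solution, no iteration needed

print(np.allclose(X_est, X_true, atol=1e-8))   # True
```

With noise-free synthetic data the recovered matrix matches the hidden one to numerical precision, which makes the "linear problem, one shot" claim easy to verify.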
And since H is built up from these Jacobians, the blocks J^T Ω J generated here are super sparse — very little is non-zero in that case. So the number of state variables I can observe at the same time impacts how dense the matrix H becomes, and the sparser this matrix is, the easier it will be later on to solve the linear system. That's the practical implication: if H is very, very sparse, I can solve the linear system very efficiently.

So how do we solve the linear system H Δx = −b? The first standard approach: compute the inverse of H and multiply, Δx = −H^{-1} b. But matrix inversion is very expensive — it has cubic complexity. Better solutions are QR factorization or the Cholesky decomposition, among other methods that avoid computing the inverse explicitly — and actually, if you use MATLAB's backslash operator instead of inv, it will pick a pretty good solver and solve this for you in an efficient way. But let me very quickly sketch how the Cholesky decomposition can be used to solve a linear system. Assume a matrix A which is symmetric positive definite, and you want to solve the system A x = b. One way would be to invert A and multiply, but inverting A is very costly. The better way is to compute the Cholesky decomposition, which gives a matrix L with L L^T = A, and the property of L is that it is a lower triangular matrix: everything above the diagonal is zero. With that, I can first solve the equation L y = b for y — so y equals L^{-1} b, but computed without any inversion — and then solve L^T x = y. Since L is a lower triangular matrix, this is very easy: it's just one pass of Gauss elimination. You start at the row which has only a single entry, so you immediately know that variable; then you go to the next line, where eliminating that one variable leaves again just a single unknown, and so on. So in just one pass through the matrix you compute the solution, and therefore the Cholesky decomposition is one of the effective means to solve such systems.

Okay, so to conclude the approach, in sum: what have we done? It's a method to minimize the squared error between predictions and observations. We need an error function, and an initial guess which should not be too far away from the solution. We linearize the individual vector error functions; squaring turns this into a quadratic form; we compute the first derivative of the quadratic form, set it to zero, and solve it, and this leads to the state update procedure, which we iterate. It's the standard way to solve such problems — there are potentially better, more effective ways, assuming additional properties of your observation functions, but this is the standard way. And what we will learn next time is how to exploit this framework to address the SLAM problem. But before that, I would like to give a very, very brief sketch of the relation between this technique and probabilities. Previously in this course everything was done probabilistically: we had our models, and state estimation was done using probability distributions. Here I haven't really been talking about probabilities — I introduced this information matrix rather magically and said, here it is. So the question is: how does the solution I presented here relate to state estimation in the probabilistic sense? We can actually show that the solution one finds here is the maximum likelihood solution if every distribution involved is Gaussian. This can be quite easily sketched — I'll keep the details sketchy. We want to estimate the posterior over the states x_0 to x_T given all observations and all commands.
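The Cholesky trick described above can be written out by hand in a few lines. This is an illustration, not a production solver: NumPy does the factorization, and the two triangular solves are spelled out as the single forward and backward passes the lecture describes.

```python
# Solve A x = b for symmetric positive-definite A via A = L L^T:
# first L y = b (forward substitution), then L^T x = y (back substitution).
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive definite by construction
b = rng.standard_normal(5)

L = np.linalg.cholesky(A)          # lower triangular, L @ L.T == A

# forward substitution: start at the row with a single entry
y = np.zeros(5)
for i in range(5):
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]

# back substitution on the upper triangular L^T
x = np.zeros(5)
for i in reversed(range(5)):
    x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]

print(np.allclose(A @ x, b))       # True
```

Each solve is a single pass through the matrix, which is why factorizing once and substituting twice beats computing the inverse.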
By applying Bayes' rule, the independence assumption between the measurements, and the Markov assumption, I can formulate this posterior exactly as a normalization constant, times the prior p(x_0), times the product over the motion models, times the product over the observation models — as in the Bayes filter, the standard way: just Bayes' rule, independence assumption, and Markov assumption, as we have done a couple of times in this course. So this is exactly the same probabilistic setup, under the same assumptions we have made before.

Okay, so now let's not consider the probability itself but the log-likelihood — the logarithm of this expression. If I take the log, the product turns into a sum: the first term, which was the normalization constant, is constant, so it is still a constant after taking the logarithm; I get the log of the prior; and the products turn into sums of individual log terms. Now let's assume we have Gaussian distributions — everything is Gaussian: this prior is a Gaussian, the motion model is a Gaussian, and the observation model is a Gaussian. So we need to look at what it means to take the log of a Gaussian distribution over x with parameters μ and Σ: there is a constant, coming from the normalization factor, and then only the exponent is left — minus one half times (x − μ)^T Σ^{-1} (x − μ). And if you look at these terms very carefully: x − μ is just my vector error function, the inverse of the covariance matrix is my information matrix, and so this whole term is exactly minus one half e^T Ω e — my scalar error term — up to a constant. This constant only depends on the covariance, or information, matrix, not on x. So I can rewrite each of these Gaussian terms as the individual error functions: my log-likelihood becomes another constant, plus the error term coming from the prior, plus the error terms coming from the odometry information and the observations.
That's nice, because what I end up with is exactly the sum of error terms we had before — the only extra things are constants. And the nice thing is: if I now try to maximize the log-likelihood, I also maximize the probability itself, because the logarithm is a monotonic function, and for the maximization I don't care about the constants. So maximizing this probability, or its logarithm, is equivalent to minimizing the negative log-likelihood, and dropping all the constants, that is exactly the sum of squared error terms I had in my original problem. So minimizing the squared error is equivalent to maximizing the log-likelihood if all the individual distributions are Gaussian. And if I use the Gauss-Newton approach to find the solution in the least-squares sense, what I actually find is the mean — and the mode — of a Gaussian distribution over the states, and therefore we have a strong coupling between this squared error minimization and the probabilistic techniques: exactly the same assumptions as before, and via this equivalence we can actually show that minimizing the squared error is equivalent to maximizing the likelihood.

Okay, so to summarize what we have done today: I introduced the problem of squared error minimization, and I presented one technique, the Gauss-Newton approach, for how to address this problem, especially for nonlinear functions — which is why we need to linearize and iterate, using successive linearizations. And this approach is, as we have just seen, equivalent to finding the mode — which equals the mean — of the posterior distribution if everything is Gaussian. It is actually a very, very popular technique — not only in computer science but in a really large set of disciplines.
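The equivalence sketched above can be written compactly. This is my own notation for the lecture's argument, assuming Gaussian prior, motion, and observation models:

```latex
% Factorized log-posterior (Bayes rule + independence + Markov assumption):
\log p(x_{0:T} \mid z_{1:T}, u_{1:T})
  = \mathrm{const} + \log p(x_0)
    + \sum_t \log p(x_t \mid x_{t-1}, u_t)
    + \sum_t \log p(z_t \mid x_t)

% Each Gaussian term contributes, up to a constant independent of x,
\log \mathcal{N}(e;\, 0, \Sigma)
  = \mathrm{const} - \tfrac{1}{2}\, e^{\mathsf T} \Sigma^{-1} e
  = \mathrm{const} - \tfrac{1}{2}\, e^{\mathsf T} \Omega\, e

% so maximizing the log-likelihood is minimizing the sum of squared errors:
\operatorname*{argmax}_{x} \; \log p(x \mid z, u)
  \;=\; \operatorname*{argmin}_{x} \; \sum_i e_i(x)^{\mathsf T} \Omega_i\, e_i(x)
```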
Wherever you need to do regression, or want to estimate model parameters given some observations you have, this technique is one of the standard tools. Okay, so that's it from my side on the topic. To read up on these basics in general, you can basically take every textbook on numerical analysis or optimization — one of the first chapters will usually cover what we have done here. The reference linked on the course website is quite brief, but gives a not-too-bad overview of the key derivations, with some hints to pick up; and if you want to look into the relation to the probabilistic interpretation, there is also a chapter for that — it is slightly different in the derivation, but the main steps are the same. Thank you very much — we are going to use this framework next week to address SLAM. This was the general framework for today, and next time we will see how it works in practice: if I want to use it to address the SLAM problem, how does the matrix H look, what kind of special structure do we get from the properties we have in the context of the SLAM problem, and how can we actually solve such a problem. Okay, that's it from my side — thank you, and see you next week.
SLAM_Course_2013
SLAM_Course_11_Particle_Filters_A_Short_Intro_201314_Cyrill_Stachniss.txt
Today, a very short introduction into particle filters in general, and especially particle filters used for Monte Carlo localization — localizing a robot in an environment given that we have a map. We typically assume we have a map, something like the occupancy grid map that we had before, and we want to estimate the position of the robot in this grid map. What did we have so far? In order to do state estimation we used Gaussian filters, the Kalman filter for example, to estimate a probability distribution, and this probability distribution was supposed to be Gaussian. If we have a Gaussian distribution, and let's say our models are not too nonlinear, we use the EKF, and it's a good approach: if your underlying distribution is Gaussian and your models are not dramatically nonlinear, the EKF is a fine choice. What happens, however, if we have arbitrary distributions? So we leave the Gaussian world — let's say we have a distribution which looks like this. How can we solve this problem, how can we represent such an arbitrary distribution — consider any arbitrary distribution you want — and how can we estimate the pose of the robot in a different way once we leave the Gaussian world? That is one of the advantages, the domain, of the particle filters, and they are very attractive techniques for doing that. The key idea of the particle filter is not to use a parametric form, like a Gaussian, but a nonparametric representation of my distribution, and what the particle filter does is use random samples. So let's say this is my distribution, and I generate samples from it. For the beginning, I can see these samples as points which live on my x coordinate, and the more samples I have in a certain area, the higher the probability of that region. I may also assign a weight to those samples — the bigger the dot here, the higher the weight, and the higher the probability of the corresponding area. So around the peak I have kind of large dots.
And where the probability is very small, I have small dots. That's an intuitive way of seeing the samples in this example. They are, if not uniformly spread, randomly spread through the environment — the picture should not suggest that this is a uniform spread in general. So how can we do that — how do we represent a posterior? As I said, we represent the posterior by a set of samples, and we see every sample as a state hypothesis: one possible explanation for the state, one possible state the system may be in. Say you have 1,000 samples — it says the world either looks like this, or like this, or like this, 1,000 times over. That's the key idea. And if I want to compute my probability distribution out of this sample set, I just say: what's the probability that the true state is within a small region? I look into that region and count the number of samples that are in it — the higher the number of samples, or the higher the weight of the samples in there, the higher the probability that the true state lies within that small area. Be aware that you may need a large number of samples to represent the distribution well: the higher the dimensionality of the function to estimate, the more samples you need, and if you have areas of large uncertainty, you may need a huge number of samples. There's no free lunch: you are more flexible in what you can represent, but this may come at a high computational cost. Consider a Kalman filter in 1D: whether your standard deviation is one centimeter or a hundred meters, you need the same computational resources. With samples, if you want a certain density of samples, you will need a dramatically larger number of samples as your uncertainty grows. That's a disadvantage of the particle filter.
So each sample is a state hypothesis x^(j), and it typically carries what is called an importance weight w^(j). In the beginning the weight can be 1, or 1 divided by N — the same weight for all samples — and then I just need to count the samples which fall into a certain region; otherwise I need to sum up the weights which fall into a certain area. If I do it that way, the probability distribution that this sample set represents is just a sum over all samples of the weight of the individual sample times a Dirac distribution centered in the state of that sample: p(x) = Σ_j w^(j) δ(x − x^(j)). Again, the Dirac distribution is the distribution which is zero outside the state, infinite at the state, but whose integral is one — so it is extremely narrow. If the weights are normalized, meaning the sum of all weights is 1, this term gives a proper probability distribution. Of course it is very peaked — that is literally what the represented distribution looks like — but if you have a sufficient number of samples and you look into regions for which you want to compute the probability, it's the same as if you would count the samples. That is, mathematically, the probability distribution represented by a sample set. Is it clear what that means? Let me try to explain it again: if I have N samples distributed over the space, then for every sample I have a Dirac distribution saying "the state is exactly this state hypothesis" — it has a high value at that state and is zero anywhere else. With N samples of equal weight, it's 1/N times the Dirac centered at the state of particle 1, plus 1/N times the Dirac centered at the state of particle 2, and so on. And if I change the weights, as long as I make sure the weights sum up to 1, it is a proper probability distribution that I am maintaining.
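The counting argument above can be made concrete with a small illustration of my own: with samples and weights, the probability of a region is approximated by the summed weight of the samples falling into it.

```python
# Weighted-sample approximation: P(a <= x <= b) = sum of weights in [a, b].
import numpy as np

rng = np.random.default_rng(4)
samples = rng.standard_normal(100_000)               # samples from N(0, 1)
weights = np.full(samples.size, 1.0 / samples.size)  # uniform weights, sum to 1

a, b = -1.0, 1.0
p_hat = weights[(samples >= a) & (samples <= b)].sum()
print(p_hat)   # close to 0.6827, the true Gaussian mass within one sigma
```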
So I can use these particles, or samples, for function approximation. Here is the Gaussian distribution and a sample-based representation of it — of course, the more samples I have, the better my approximation will be — and you can see that in the area of the mode more samples are concentrated, with fewer samples out towards the tails of the Gaussian. Or take this fancier distribution, with a high concentration here, another high concentration here, and a smaller concentration down there; in the particle representation some areas appear to have probability zero, simply because I haven't used enough samples. So again: the more samples I have, the better my approximation. The question now is how we obtain those samples — how can we actually generate them? This can be tricky. There are some distributions for which I can sample in closed form. One example is the Gaussian distribution: if I want to generate samples from a Gaussian, one technique is to take 12 random numbers drawn uniformly between minus and plus the standard deviation, sum them all up, and divide by 2. This generates samples from a Gaussian distribution centered around zero with that standard deviation. More details on this can be found in Probabilistic Robotics and in the Introduction to Mobile Robotics course, where we talk about how to generate samples and show graphically why you end up with this distribution — I won't repeat that here. The important thing for me here is: for certain distributions, for example a Gaussian, you can efficiently draw samples. But the question is: what about other distributions? If you have some weird, odd distribution, how do we generate samples from it?
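The twelve-uniforms rule quoted above (it appears in Probabilistic Robotics) is easy to sketch: summing twelve uniform numbers in [−σ, σ] and halving the result gives, by the central limit theorem, a good approximation of a zero-mean Gaussian with standard deviation σ.

```python
# Approximate sampling from N(0, sigma^2) via twelve uniform draws.
import numpy as np

def sample_gaussian(sigma, n, rng):
    u = rng.uniform(-sigma, sigma, size=(n, 12))  # 12 uniforms per sample
    return 0.5 * u.sum(axis=1)                    # sum and halve

rng = np.random.default_rng(5)
s = sample_gaussian(2.0, 100_000, rng)
print(s.mean(), s.std())   # close to 0.0 and 2.0
```

A uniform on [−σ, σ] has variance σ²/3; twelve of them summed give 4σ², and halving scales the variance by 1/4, so the result has exactly variance σ².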
One technique to do so is importance sampling, and it relies on the so-called importance sampling principle. What the importance sampling principle tells us is this: say we have a function f, also called the target distribution, and we want to generate samples from this target; unfortunately the target has some form we cannot sample from in closed form. So how do we do it? The importance sampling principle tells us that we can use a different distribution, called g, the proposal distribution — sometimes called π later on — and actually generate the samples from this proposal. Say we take a Gaussian as the proposal; it is different from the target, but we can sample from it. So we generate samples from this proposal distribution, and then we do a correction, taking into account the difference between the target distribution and the proposal distribution. This introduces a weight, the so-called importance weight, given by evaluating the target at the sample location divided by the proposal at the sample location: w = f(x) / g(x). So in my example over here: this is my proposal distribution, the Gaussian, and this is my target distribution. I simply generate samples from the Gaussian — the x locations drawn here follow the Gaussian — but then I reweight those samples by the ratio between both distributions: depending on the ratio I get a large or an extremely small weight. In this way I correct the samples, and I obtain weighted samples which follow my target distribution. That's what the importance sampling principle tells me. Question: haven't we seen importance sampling before? Yes — that was the same principle; here we use it specifically to generate samples.
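A minimal importance-sampling example of my own: approximate moments of a bimodal target density we pretend we cannot sample from, using samples from a broad Gaussian proposal and weights w = f(x)/g(x).

```python
# Importance sampling: sample from the proposal g, weight by f/g.
import numpy as np

rng = np.random.default_rng(6)

def target_pdf(x):
    # bimodal target: equal mixture of N(2, 1) and N(-2, 1)
    n1 = np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2.0 * np.pi)
    n2 = np.exp(-0.5 * (x + 2.0) ** 2) / np.sqrt(2.0 * np.pi)
    return 0.5 * n1 + 0.5 * n2

def proposal_pdf(x):
    # broad zero-mean Gaussian with std 3, easy to sample from
    return np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))

x = rng.normal(0.0, 3.0, size=200_000)   # samples from the proposal
w = target_pdf(x) / proposal_pdf(x)      # importance weights
w /= w.sum()                             # normalize

print(np.sum(w * x))       # approximately 0.0 (mean of the symmetric target)
print(np.sum(w * x ** 2))  # approximately 5.0 (= 1 + 2^2, its second moment)
```

The proposal is deliberately wider than the target, so it covers both modes and the weights stay bounded.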
Okay, and what the particle filter now does is use exactly these weighted samples to represent the posterior, and it uses the importance sampling principle to update it. So it is a recursive Bayes filter, a nonparametric recursive Bayes filter. That means we do not maintain a closed-form distribution such as a Gaussian; we take those random samples as our representation of the underlying distribution, and I update my samples based on the motions I execute and the observations I obtain. It again has two steps, a prediction and a correction. The prediction step draws samples from the so-called proposal distribution; this is a distribution I can easily sample from, and for localization I typically use my odometry motion model as the proposal to generate the new samples, taking into account the motion of the platform. Then I use my observations for doing the correction, and this is the connection between the prediction and the correction step. The important thing is: the more samples we have, the better the estimate. So the particle filter has three main steps. The first is sampling from the proposal distribution, so this generates samples from my proposal pi over here; this is a distribution which I choose as a designer and from which I can sample efficiently, for example a Gaussian. What I then do is compute the so-called importance weight, which takes into account the function that I actually want to approximate. The only thing I require is that I can evaluate it pointwise, so for every point in space I can compute its value and divide it by the value of the proposal. This does the correction, telling me the difference between the proposal I chose to generate samples from and the function I want to approximate; it gives me the importance weight, which corrects for the fact that I actually sampled from the "wrong" distribution. And then there's a third step, the so-called resampling step. It's
a sampling with replacement. I have my weighted sample set, and I just draw J samples out of it with replacement, where the probability of drawing a sample is proportional to its importance weight. You can see this, informally speaking, as a kind of survival of the fittest: it's quite likely to pick a sample with a high weight, think of those as the good samples where the approximation was good, and the ones which haven't been drawn, which tend to have small weights, die out. In this way I focus my sample set on the likely part of the state space. Okay, let's look at the algorithm. How does the particle filter algorithm look? I have my old sample set as an input, I have my controls, and I have my observations, so the same inputs as the Bayes filter: previous belief, control, and observation. I start with two new sets, the predicted set and the corrected set, which both start empty. I iterate over all samples in my previous belief and I obtain a new sample x^[j] by sampling from my proposal distribution; what happens in practice in localization is that I take the sample out of my previous belief, apply the motion forward, and then sample around that area. Second, for every sample I compute the correction by dividing the target distribution by the proposal, and I add the weighted sample to my predicted sample set. This I do for all samples. And then, down here, comes the resampling step: we draw a particle i with probability proportional to its weight and include it in the resulting sample set, which is what I'm going to return. Okay, so this was the general particle filter; let's look specifically into Monte Carlo localization, which is just a specific instance of particle filtering. What we do here is: each particle is a pose hypothesis of the robot, an (x, y, theta) state the system may be in, one estimate of where the system might be at the moment.
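Before moving to Monte Carlo localization, the generic loop just described — sample the proposal, weight, resample with replacement — can be sketched as follows (a toy 1-D example; the particular motion and observation models are made up for illustration):

```python
import math
import random

def particle_filter_step(particles, sample_proposal, weight_fn):
    """One generic particle filter update."""
    predicted = [sample_proposal(p) for p in particles]  # 1. prediction
    weights = [weight_fn(p) for p in predicted]          # 2. importance weights
    # 3. resampling: draw with replacement, proportional to weight
    return random.choices(predicted, weights=weights, k=len(particles))

# Toy example: a 1-D robot near 0 moves +1 m, then observes z = 1.0.
random.seed(1)
particles = [random.gauss(0.0, 0.5) for _ in range(5000)]
move = lambda x: x + 1.0 + random.gauss(0.0, 0.2)        # proposal = motion model
z = 1.0
obs_likelihood = lambda x: math.exp(-0.5 * ((z - x) / 0.3) ** 2)
particles = particle_filter_step(particles, move, obs_likelihood)
mean_pose = sum(particles) / len(particles)
```

After one prediction–correction–resampling cycle the particle cloud concentrates around the true pose of 1.0.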
What I typically do in Monte Carlo localization is choose the motion model as my proposal distribution, so how the robot's motion changes the state. I take every sample out of my previous sample set, and then, let's say the odometry said the robot moved a meter forward: I take each sample, move it a meter forward, and then add some sampled noise around it. That's basically the step: I take every sample out of my previous sample set and apply my motion model in a sampling fashion, sampling from that model, and this generates the next generation of samples. One advantage is that I don't need a linearization of my motion model here, because it can be an arbitrary, odd function; I just propagate one state through it and sample the new state from the resulting model. Then I do the correction step, and the correction takes the value of my target distribution, which is really the same distribution we used in scan matching before: the likelihood of the observation times the likelihood under my odometry. That is my target distribution, and it is divided by the proposal, so by my odometry model; the odometry terms cancel, and the weight is exactly my observation model. And I have to say, this is a special case of the particle filter: if I use my odometry motion model as my proposal distribution, the correction term, which is proportional to the importance weight, reduces to the observation likelihood. I am free to choose my proposal, and if I choose a different proposal distribution, I will end up with a different correction term, because it is always given by target divided by proposal. I completely see that the step from the importance sampling principle to this slide may seem a little quick; there is a longer derivation which shows that if you use the motion model as your proposal, the observation model is exactly the weight you're going to obtain.
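The "move each sample forward and add sampled noise" step might look like this (a sketch only: the (rot1, trans, rot2) decomposition and the single `alpha` noise scale are my simplifications, since the lecture gives no concrete noise parameters; the standard odometry model uses four separate alpha coefficients):

```python
import math
import random

def sample_odometry_motion(pose, odom, alpha=0.05):
    """Sample a successor pose (x, y, theta) given an odometry reading
    decomposed into an initial rotation, a translation, and a final
    rotation. `alpha` scales the sampled Gaussian noise."""
    x, y, theta = pose
    rot1, trans, rot2 = odom
    # perturb each odometry component with sampled noise
    r1 = rot1 + random.gauss(0.0, alpha * (abs(rot1) + trans))
    t = trans + random.gauss(0.0, alpha * (trans + abs(rot1) + abs(rot2)))
    r2 = rot2 + random.gauss(0.0, alpha * (abs(rot2) + trans))
    return (x + t * math.cos(theta + r1),
            y + t * math.sin(theta + r1),
            theta + r1 + r2)

# "The robot drove one meter straight ahead": the cloud centers at (1, 0).
random.seed(0)
samples = [sample_odometry_motion((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
           for _ in range(20_000)]
mean_x = sum(s[0] for s in samples) / len(samples)
mean_y = sum(s[1] for s in samples) / len(samples)
```

Applying this to every particle implements the prediction step without ever linearizing the motion model.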
So we can derive the particle filter algorithm for localization: it is exactly the same algorithm we had before, except that we have changed the two lines underlined in red here, namely the odometry motion model from which I sample and the observation model for doing the correction. And if I do that, this is what I obtain. What you see here are samples spread all over the place; all these red dots are samples, each one saying this is one possible pose where the robot might be at the moment. In reality the robot was here, starting over here and moving through this corridor; that's the path the robot took, and this illustrates the current best estimate. In the beginning it is arbitrary, because all particles have the same weight. These blue lines are the sonar readings that the robot actually perceived, so these are the sonar scans, and what we see now is a number of prediction, correction, and resampling steps which evolve this state: we propagate all samples forward according to the motion of the robot, we weigh them, and then we do the resampling step, and in every resampling step the bad particles are more likely to die out and the good particles are more likely to survive. Okay, if I start the video now, you see what happens: the state is updated, the particles move a little bit, that's the motion of the robot, and after a few steps the system identifies that the robot must be in the corridor, because that's the area which best matches the observations. We now have two clusters of states, here and here, because the corridor is symmetric, so there's no way to figure out which one is correct until the robot goes into a room and can say: okay, the observations fit this room and not that room. So these samples die out and the sample set over here survives. That's a nice property, because during the estimation process you can have multimodal distributions; you can say it's either here or here, and only once
I've seen enough information can I actually tell where I am. That's an advantage if you do what we call global localization, where you start with no initial guess, or a uniform one. This is a strong advantage of particle filtering techniques compared to techniques like the Kalman filter, which has only one mode. You could of course start with a Gaussian distribution that is close to uninformative, with a huge variance, but you can't maintain multiple modes over time, so that doesn't work well in practice. Okay, a few words about the resampling step, because it is one of the steps which often sounds a little odd to people. Why do we actually do that? We have our weighted sample set and we just draw n samples with replacement, where the likelihood of drawing a sample is proportional to its weight. Why? The first reason is that we have a finite number of samples, and with a finite number of samples we want to concentrate them in the regions of the state space whose probability density is substantially above zero. It can happen that we make a few wrong decisions and particles drift into areas which have basically zero probability. If we had an infinite number of samples, we wouldn't care if a large number of samples went into these unlikely regions, as long as we had enough samples in the areas with high likelihood. But in all practical applications we only have a finite number of samples that we can represent on our computer. So what we do is let the bad ones die out and use the samples to represent the high-likelihood areas. You can see this as a kind of survival-of-the-fittest principle, informally speaking rather than in any mathematically rigorous sense, and this trick avoids having a lot of
samples which go into these basically zero-probability areas, and it is needed whenever we have a limited number of samples. Of course, the more samples we have, the better the approximation gets, but since we don't have an infinite number of samples, we need this resampling step, and in practice it is actually essential. If you don't resample, you won't be able to track the pose of a system over long periods of time, because your filter will simply diverge: you won't have any samples left which represent the likely areas. Every sampling step makes an error, these errors accumulate, and there's no way of recovering unless you have an infinite number of samples or you do resampling. So for practical applications, resampling is really important. I can illustrate how to actually implement it with the so-called roulette wheel. You have a roulette wheel where the sizes of the buckets are proportional to the weights, and you throw the ball onto the wheel; if it ends up, say, in the bucket w3, which corresponds to particle number three with weight three, you pick sample three and put it into your new sample set. You repeat this process n times: turn the roulette wheel once, pick a sample, do another one, and continue like this. This is called roulette wheel sampling; it's a standard approach, and you can do it reasonably efficiently (there are more efficient ways, but it's not dramatically bad). If you arrange the cumulative weights in an array, you can do a binary search to find the chosen bucket, so picking one sample costs log J, and you repeat this J times if you have J samples, which gives J log J overall.
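A sketch of roulette-wheel resampling with the binary search just mentioned (function name mine): the cumulative-weight array plays the role of the wheel's buckets, and each draw is one uniform random number plus one binary search.

```python
import bisect
import random

def roulette_wheel_resampling(particles, weights):
    """Draw len(particles) samples with replacement, with probability
    proportional to weight. Building the cumulative array is O(J);
    each of the J draws is an O(log J) binary search."""
    cum = []
    running = 0.0
    for w in weights:
        running += w
        cum.append(running)          # cumulative weights = wheel buckets
    total = cum[-1]
    # each "turn of the wheel" = one uniform draw + one binary search
    return [particles[bisect.bisect_left(cum, random.uniform(0.0, total))]
            for _ in range(len(particles))]

random.seed(7)
out = roulette_wheel_resampling(['a', 'b', 'c', 'd'], [0.0, 0.0, 1.0, 0.0])
```

With all the weight on one particle, only that particle survives the resampling.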
So if you have one million samples, you need on the order of a million times log of a million operations. But there's a better way to do it. You can still use the roulette wheel, but what we take now is a uniform spacing of pointers that point into the buckets, and I turn this roulette wheel only once and then pick the samples corresponding to the buckets these equally spaced arrows point to. So I draw just one random number and obtain samples which are chosen according to the importance weights. This is called stochastic universal resampling, or low-variance resampling, and it has two advantages. The first is that it runs in linear time, because you draw one random number and then iterate once through the buckets to find the corresponding samples. The second concerns the case where all samples have exactly the same weight, which can happen if the observation doesn't help me to identify which sample is better than another. This technique then guarantees that we obtain exactly the same sample set we had before. With independent roulette-wheel draws that wouldn't necessarily be the case: maybe one sample gets chosen twice and another not at all. But if all samples have exactly the same weight, there's no basis for saying one sample is better than another, so I should keep my sample set as it is. The low-variance sampler therefore introduces less variance into the process, and it's the technique you should actually use. I also put the algorithm here; I don't want to go through it in detail, but the important thing is that it makes one pass through the sample set, and one of the exercises that we will hand out next week will cover this low-variance resampling.
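The low-variance sampler just described can be sketched like this (a minimal version; function name mine). Note both properties from the text: one random number, linear time, and an identical output set when all weights are equal.

```python
import random

def low_variance_resampling(particles, weights):
    """Stochastic universal resampling: ONE random offset, then J
    equally spaced pointers into the cumulative weights -> O(J)."""
    J = len(particles)
    total = sum(weights)
    step = total / J
    u = random.uniform(0.0, step)   # the single random draw
    c = weights[0]                  # running cumulative weight
    i = 0
    out = []
    for _ in range(J):
        while u > c:                # advance to the bucket the pointer hits
            i += 1
            c += weights[i]
        out.append(particles[i])
        u += step                   # move the pointer by a fixed spacing
    return out

random.seed(3)
same = low_variance_resampling(['a', 'b', 'c', 'd'], [1.0, 1.0, 1.0, 1.0])
skew = low_variance_resampling(['a', 'b', 'c', 'd'], [0.97, 0.01, 0.01, 0.01])
```

With equal weights the sample set is returned unchanged; with one dominant weight, that particle fills almost the whole new set.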
You find the algorithm here on the slide, and your task will be to implement parts of a particle filter; this will be one part of it. What the approach does in the end is draw one random number as the starting point and then iterate, always adding 1/n to the cumulative pointer, jumping in steps of 1/n through the roulette wheel and always taking the sample corresponding to the current bucket. To put all that together, here is another example: this was actually one of the first museum tour-guide experiments, the context in which Monte Carlo localization was originally developed. This is a museum, a top view of it, and you want to localize the robot inside it. The robot has two laser range scanners, one looking to the front and one looking to the back, which give me the observations. Then, sorry, first a motion update is performed, then the weighting, then the resampling step, so the arrows concentrate. Then we take the next measurement, the weights are updated, resampling is done, and you see how particles die out; a motion update is performed, and you can see from here to here that the motion update introduces noise, and therefore the particles spread out; this comes from the sampling procedure. Then you again take a measurement and do a weight update, so you see that some samples are darker than others, and the darker, the higher the probability; then you do resampling and the unlikely states die out, and the filter concentrates on the more likely ones. You can continue this process and keep localizing the robot. This technique is called Monte Carlo localization, and it is today the gold standard for robot localization. I don't want to say all systems use it, because that's not true, but a large majority of
systems, especially those which don't use predefined landmarks, use Monte Carlo localization as the localization technique. So, to summarize particle filters: I know this was a very short introduction, and you may revisit particle filters either through the references that we'll provide at the end, through the book, or through the Introduction to Mobile Robotics course, where particle filters are introduced more extensively. Particle filters are a recursive Bayes filter, and specifically a nonparametric variant of it. That means we don't use a parametric function to describe our posterior; we use random samples instead. We draw samples from a so-called proposal distribution to advance to the next state and then do a correction in order to account for the fact that the proposal distribution is not the target distribution we want to approximate, and the final step is a resampling step. This technique works very well, especially in low-dimensional spaces; the higher the dimension of the state space, the more samples are needed, and at some point it simply becomes intractable. Monte Carlo localization, in turn, is a special instance of particle filters, namely doing localization with a particle filter. It uses the particles to represent the pose of the robot, given by x, y, and the orientation theta; it typically uses the motion model as the proposal distribution, and as a result of this choice the importance weight reduces to the observation likelihood. This technique is called Monte Carlo localization, or MCL, and it is the gold standard for mobile robot localization in most applications today; there are other approaches for sure, but this is one of the standard choices. You find more on Monte Carlo localization in the Probabilistic Robotics book, both on particle filter localization and on the particle filter itself, depending on
which aspects you want to focus on. In order to implement the particle filter you of course need the motion model and the observation model, and these are exactly the models that were explained in the first or second week of this course; the models used here are exactly those models, and here is again the reference to the chapters in the book if you want to revisit them. So this was a quick and not too detailed introduction to the particle filter, but I think it highlights the most important things. First, we need a proposal distribution which is used to generate the next generation of samples; typically this is the motion model, though there are other choices, and as a designer I can set it as I want. Then I have to do a correction step which takes into account the difference between my target distribution and my proposal distribution; this gives a weight for every sample, and I use this weight to correct for the difference between what I want to approximate and what I have approximated so far. And the third step is the resampling step. These are the key concepts that we also need in order to understand how SLAM works with particle filters: we again need to define a proposal distribution, and we'll start with the same proposal, the motion model; later in the course we will see that there are smarter, better ways of doing that, which will then also change the way we compute the weights; and again we will do the resampling step. So I focused on the high-level concepts here; in this course today we only have, whatever, an hour and 45 minutes on the particle filter, but this should give you the basics for understanding what's coming. You are of course happily invited to dive more into the details, but that can be time-consuming, depending on how deep you want to go. So this is the minimum amount that
you will need to understand the rest of the course. If this sounds really unfamiliar to you, or you think you haven't fully understood it, I recommend you revisit it before next week, when we dive into particle filtering in more detail, especially for doing SLAM with FastSLAM, one variant of doing SLAM with particle filters. But the concepts we have discussed here are the key ingredients that you need in order to understand what's going on. Are there any questions at this point? Okay, then that's it from my side. Thanks, and we see each other next week, next Monday. Thank you.
|
SLAM_Course_2013
|
SLAMCourse_03_Bayes_Filter_201314_Cyrill_Stachniss.txt
|
Welcome to the course. Today we are looking into the Bayes filter; we'll revisit some of the basic concepts behind Bayes filters, and then we'll look into the Kalman filter and the extended Kalman filter as one way of implementing or realizing a Bayes filter that is, or has been, frequently used in the context of the simultaneous localization and mapping problem. Okay, in general, the first hour today covers topics that have been addressed in the Introduction to Mobile Robotics course, and in the second part of the lecture we'll dive into the Kalman filter and the extended Kalman filter. Some of you may have seen that already, but I'll try to give a complete picture of the Kalman filter algorithm, so that next week we can go into more detail and realize a SLAM system using an extended Kalman filter; that will also be part of the homework assignments, which will probably come out next week or in two weeks. Just to put what we are doing here into perspective: the Bayes filter is one technique for doing state estimation. We have data, typically sensor observations, and control data, meaning commands we have sent to the robot; the controls are expressed by u over here, and the observations by z over here. And we want to estimate the state of our system, whatever that state is: it could be the position of the robot in the environment, it could be the position of a landmark, it could be the state of a door, whether it is open or closed; it could be anything we want to estimate, which we can perceive and potentially modify by executing actions. So it's a general framework for state estimation, and the overall goal is to estimate the posterior over the state x given our sensor data and our controls; that's what state estimation is all about. So if you don't
know anything about the world, we start with a uniform distribution; that means every state has the same likelihood. As we acquire observations and execute actions, we become more certain about the state, and hopefully in the end we will have a very peaked distribution around one state, which is hopefully the correct one. Again, we can never say the system is exactly in that particular state, because we have a probability distribution here, so we only get distributions; but typically we can say the mean estimate is that the world is in a certain state. As the name Bayes filter suggests, we estimate this posterior using Bayes rule and some other rules from probability theory, to come up with an equation, in this case a recursive equation, that allows us to integrate one observation and one control at a time and recursively estimate the current state of the system. Okay, so the distribution over the current state of the system is often defined as the belief about x_t; the small index t refers to the current time step, and the belief is exactly the probability distribution over x_t given a sequence of sensor observations and a sequence of commands. The sequences are expressed here by z_{1:t} and u_{1:t}, so we have t sensor observations and t executed controls, and we want to estimate the current state of our system. This is just the definition. Now let's apply Bayes rule to x_t and z_t, so we want to swap those two variables, and Bayes rule tells us how: p(a | b) equals p(b | a) times p(a) divided by p(b). It's a standard application of Bayes rule, and if we apply it we end up with this equation over here. What we have done is swap x_t and z_t, so z_t is now sitting here and x_t moved
over here; that's the only change we made in this part. Then we have the second term, which is the distribution over x_t given all variables except z_t, and finally the normalizer, the normalizing term, which we are not interested in computing explicitly, so it's just written in this compact form as eta. So this is just the basic application of Bayes rule; nothing else has happened here. Okay, now we can look at the first term, this first distribution, and we can say: if we want to know the probability of obtaining a certain measurement z_t, given that we know the state of the world, we can ignore all the previous measurements and all the previously executed commands. This is called the Markov assumption: given you know the state of the world, you can forget about what happened in the past, and that means we can get rid of these observations over here and these controls over here. Again, this is an assumption; it doesn't necessarily have to hold, but it's a standard one: given you know the state of the world, you can estimate the probability distribution over the current observation. If you have a sensor with a bias, a systematic error in your sensor measurements, the previous sensor information might help you get a better estimate of this distribution, but that is ignored here. So we really assume that, given we know the state of the world, we can drop the past observations and controls, and this term dramatically simplifies to this term over here. Everything marked with the red bar is the quantity that has changed: this expression turns into this one, and the rest is left unchanged. Okay, so let's have a look at the second term over here. This is: we want to estimate the current state
of the system given the past observations up to time step t-1 (so we're missing the last observation) and all executed commands. It's like having an estimate up to time t-1 plus executing a motion command, and therefore we expand this term using the law of total probability. We introduce a new variable x_{t-1}, which represents the state of the system at the previous time step; we can introduce it easily using the law of total probability. This term here turns into an integral: the quantities are exactly the same as before, except we have this additional, newly introduced variable x_{t-1}, and we integrate over it. We take this term over here times the likelihood that this variable takes its value given the same information we had before. So we introduce a new variable and integrate over it, and the expression stays exactly the same; again, just the application of the law of total probability. Any questions about the law of total probability, or is this clear to everyone? Mostly? Okay, maybe we can just write it down on the blackboard with fewer variables to make it clear. If I have a distribution over a variable, p(a), the law of total probability says we can introduce a new variable b, another event, and write p(a) as the integral of p(a | b) times the likelihood that this event occurs, p(b), integrated over all possible b, db. What that means is: if you want the probability distribution over a variable a, you can ask how likely a is given a particular value of b, multiply by the likelihood that b takes this value, and integrate over all possible outcomes of b. If you write it for the discrete case it may be easier to see: the sum over all b of p(a | b), the rest stays exactly the same, times p(b). So we sum over all potential
values that b can take, ask what the likelihood is that this value occurs and what p(a | b) is for this value, and if we do that for all possible values of b, we end up with exactly p(a). This is called the law of total probability; the sum and the integral are the same idea, the integral for the continuous case and the sum for the discrete case: if you have a discrete random variable you sum over all possible outcomes, and if you have a continuous variable you cannot enumerate them, so you use the integral. Okay, so that was just the application of the law of total probability. Then we look again at the first term over here, and again we apply the Markov assumption that we used before, in a very similar setting: if you are interested in estimating the current state of the world, and you know the previous state of the world, everything you have seen or done before that time step is no longer interesting. So we can get rid of all the observations sitting here and all the commands except the last one, because u_t is executed to go from x_{t-1} to x_t. It tells me something about how the state of the system should evolve from time step t-1 to t, so u_t needs to stay; all the rest can go away according to the Markov assumption. So this equation over here simplifies to this one, and it is just a term which says, for example, if you estimate the state of the robot: given I know where the robot is at time t-1 and it executes the motion command u_t, for example go one meter forward, I obtain a probability estimate for the position of the robot at the current point in time, which is somewhere around my predicted pose. That's exactly what this term tells me. Yes, please? [Student asks: from line two to line three you used the same Markov assumption, but you also got rid of all the u's; why?]
because here we were knowing this the known state was XT and here the known state is XT minus one and therefore I can only get rid of up to the state I'm I'm I the state I know okay so if I if I if I know a state and I do something in the future this may help me to estimate my future state but nothing which has happened in the past or up to that time step exactly okay um and the next thing we do with another Mark of assumption in this recursive term down here um or it's actually Independence assumption saying if you want to estimate the state at XT minus one we care and we don't know any previous state we care about all the observations and all the controls except the control command which has been executed in the future so if I say if I want to estimate the um the state of my system up to the previous time step I don't care which command that the system executed in the future this is obviously again an assumption this is not necessarily true if you can think about a robot which goes just drives just forward and XT is the position of the robot um if I know if if my if I have two states the system can be in so I say one state is this state over here and the other state is this state over here these are my my bimodal belief and if I if I know that the next motion command is go one meter forward it's quite likely that I'm not in this state because it would lead to a collision it's more likely that I was in this state so knowing what the system executes in the future can under certain circumstance SES allow you to make a better prediction but this is something which is ignored here so it's really an in assumption that you assume I I ignore that I assume that the the motion command execut in the future doesn't tell me anything about the state at the previous point in time but again this is an assumption it's it's it's not bad to make an assumption we only need to be aware of those assumptions so if something breaks or fails later on we may revisit our assumption say hm 
is this assumption perhaps was this assumption not justified okay and then if you look to this term now over here this term looks very very similar to the term we in the beginning except that the index T minus t is replaced by T minus one so it's kind of a recursive term this is just this guy here is a believe of the system at the time step T minus one so I can just rewrite that and express this as the belief of the system at the previous point in time okay so we have now so it's important thing we have an recursive update scheme which allows us to estimate the state of the system based on the uh previous state and the current motion command UT and the current observation ZT that means we have if we have a probability distribution about what this in which state the system was in previously and we execute a motion command and we obtain a sensor observ we can actually compute the state of the system at the new point in time and this is exactly what the base filter does so it's an recursive update scheme which allows you to update your probability distribution based on a command that the system executed and the observation that the system obtained yes please so when we calculate that do we do that online so that the believe XT minus one is just a value or do we calculate it recursively so we we could we could do both typically it's used in an online fion that's kind of the key advantage of the or one of the advantages of the base filter that if you say given I specify the the distribution at the point in uh t0 so I say okay the robot starts here or I have no idea where the system starts have a uniform distribution whatever it is it's kind of your initial belief and then once you get an observation once you get an reading you can estimate the time T1 given you a knowledge about t0 and then if the next observation and and command um is obtained then you can compute the state at the point in time T2 given T1 so you really use that in an online fashion as soon as data 
comes in, you can estimate the next step, because in order to estimate this guy over here you don't need any information about the future. That is the key thing: to estimate it, you need no observation or motion command obtained in the future. Of course, in theory you would, but that was our independence assumption, and under this independence assumption this term does not include any future sensor observation or future command, so we can use it in an online fashion. If there were future observations in here, we could not, because then what I measure in the future would influence my current state. Again, this is an independence assumption the Bayes filter makes, but given this assumption it is a powerful tool for online state estimation. Any further questions about that? Okay, great.

Therefore you often find the Bayes filter written in two steps: a prediction step and a correction step. The prediction step takes into account the command that was executed, and the correction step takes into account the sensor observation; the predicted belief is usually written as the belief with a bar over it. The prediction step asks: given x_{t-1}, how do I estimate x_t? I need to integrate here, because I don't know exactly which state the system was in at time t-1; it is a probability distribution, so I integrate over all possible states the system could have been in at the previous point in time and compute where it will end up. That is the prediction step. Then we have the correction step, which says: given my predicted belief, I now take the sensor observation into account, for example to increase the likelihood of states that are in line with that observation. And then we have the normalizing term, which ensures that the sum over all possible states, or the integral over all possible states, sums up to one, or integrates to one; that is what the normalizer is about.

Okay. If you now look at these two terms more carefully, you see that this one is what we call the motion model: it tells us how the system evolves from time step t-1 to t given an executed command; it is also called the process model. And in the correction step we have the term that gives the likelihood of an observation given the system is in a certain state: given I know the state, what is the likelihood of that observation? This is often called the sensor model or the observation model. We briefly talked about those models before; I will now go a little more into detail and give a few examples of how these models can look, so that you understand how we use the sensor model and the motion model during this course in order to estimate the belief about x_t. In the context of SLAM, x_t would be, for example, the pose of the robot together with the locations of all landmarks, or whatever map representation I use; for localization only, it would just be the pose of the robot.

It is important to note that the Bayes filter is just a general framework for recursive state estimation; it does not tell us which technique to use to actually compute those integrals, or what assumptions to make about the distributions. For example, if I know something about the quantity I want to estimate and decide to use only Gaussian distributions, then it makes sense to use a variant of the Bayes filter that is explicitly made for Gaussian distributions, because it will be more efficient and more effective than a filter for general distributions. Whether everything is Gaussian, how the motion model and the observation model look, whether I can express them as linear or nonlinear functions: all these properties of the system influence which implementation of the Bayes filter I actually need. There is a large number of variants; we will mainly look into two families. One is the Kalman filter family, the other is the particle filter family. They are families because there is more than one Kalman filter algorithm and more than one particle filter algorithm. They differ mainly in the models they allow us to use, linear versus nonlinear motion and sensor models, and in the underlying assumptions about the distributions: are they Gaussian or non-Gaussian, do I use a parametric form like a Gaussian, which I can specify by a mean and a variance, or a nonparametric distribution with no closed form, no way to describe the function with a limited number of parameters, so that I may need sampling techniques to represent the full distribution. As I said, we will look into Kalman filters, which require Gaussian distributions and linear, or linearized, motion and observation models, and into particle filters, which are a nonparametric approach that allows more or less arbitrary models. In this sense the particle filter is the more general filter: it can handle cases the Kalman filter cannot. If, however, the world is in line with the assumptions the Kalman filter makes, the Kalman filter is the better estimator. So there is no free lunch; it really depends on your problem. If your problem truly fits the assumptions of the Kalman filter, the Kalman filter is the optimal estimator, you can't do better. But if you have highly
nonlinear motion, the picture changes. Motion with a mobile robot is often nonlinear, because there are angles involved, which means you have sine and cosine functions in there, and those are nonlinear. So the assumption of linear models is often violated, depending on how your system is built, and therefore particle filters are often more robust, because they handle those cases better, though usually at an increased computational cost. Okay. We will look into Kalman filters and particle filters in more detail in this course; we start with the Kalman filter, as the first technique. Today was just the general Bayes filter framework, and in the next lectures we will dive into the details of the specific implementations of those filters.

Okay, let's have a look at the motion model again. It was the term describing how I estimate the current state of the system if I know the previous one and the command that was executed. So let's look into robot motion. When a robot moves around in an environment, the motion is inherently uncertain, because the robot typically makes small mistakes. If I tell my robot "go one meter forward", it gives power to the motors, lets them turn, and when it thinks it has made the right number of wheel revolutions it stops, because that should be one meter. But quite likely it is not exactly one meter; it is one meter and two centimeters, or 99 centimeters. It is very likely that we do not end up exactly where the command said, and the question is how to model this.

To give you an example, this is the trajectory of a robot through a maze environment; this is the corrected trajectory the robot was actually driving. It started over here, went down here, and through the maze. If we instead just integrate the odometry information, counting the revolutions of the wheels and assuming there is no uncertainty, we end up with a trajectory that looks like this. You can see the system has a slight drift to the right; going down here, again a slight drift to the right. The drift is very small, but it accumulates over time, so the integrated trajectory looks like this: a similar shape, but definitely not the trajectory the robot actually took.

So how do we specify going from a state x to a state x', or from x_{t-1} to x_t, given a command? In robotics, at least in wheeled robotics, which is mainly what we focus on here, we typically use two kinds of models. The first is the so-called odometry-based model. Odometry assumes we have wheel encoders, something attached to the wheels that counts their revolutions, which gives a pretty good estimate of where the robot is going. Why may it still be inaccurate? Maybe one wheel is slightly bigger than the other; with air-filled tires, say the pressure in one wheel is slightly lower than in the other, which results in a slight drift. Or you drive on uneven ground. Whatever it is, small errors are introduced, but the odometry model is typically the easier model to handle. The second is the velocity-based model. It assumes we have no encoders, so we cannot count wheel revolutions; instead I send velocity commands to the system and assume the system actually followed them. Velocity-based models are typically used, for example, for flying vehicles, where it is difficult to use encoders because the vehicle is in the air, or for humanoid or legged robots: they may have encoders in their joints, but you never know exactly how big the step is that the system takes while walking, so those systems often use velocity-based models. So: if we have odometry encoders, we should use the odometry model, which is the case for most wheeled robots; otherwise we stick with the velocity-based model.

The odometry-based model I briefly showed last week. Assume this is the system at time t-1, it moves to time t, and the question is where it ends up. For a robot living in a 2D plane, with an x coordinate, a y coordinate, and an orientation theta, which can only drive along its orientation, we can express the motion by a first rotation, towards the heading of the new pose, then a translation along a straight line to the new position, and then a second rotation into the final heading. That is one way of describing it; there are others. I could also say the robot first drives forward along its current orientation, ending up somewhere down here, then moves sideways, and then rotates; that is the so-called forward-sideward-rotate model. The one here is the rotation-translation-rotation model, one of the commonly used ones. Given this pose and this pose, I can compute those three parameters, and I can also go the other way: given the first rotation, the translation, and the second rotation, I can compute the resulting pose. Whichever I have, I can compute the other. The math is not too difficult: the translation is just the Euclidean distance between the centers of the two robot poses, between this point here and this point over here; that gives me the distance. The first rotation is given by the arctangent, atan2, of the difference between the two positions, which, if you draw the triangle here, is this angle, and then we subtract the orientation the system currently has. You can think of it as first rotating from the current heading towards the new position, then moving forward, and then doing the final rotation that is still missing.

If we want to introduce noise, we typically have to make an assumption about what kind of noise enters the system, and the standard choice here is to assume a Gaussian error on each of the three components: while doing the first rotation the robot makes an error, while translating it makes an error, and while doing the second rotation it makes an error. We express it this way: an error on the first rotation, assumed Gaussian, an error on the translation, assumed Gaussian, and an error on the second rotation, assumed Gaussian. So we assume Gaussian noise on the odometry command, which consists of the first rotation, the translation, and the second rotation. This, however, does not lead to a Gaussian belief about where the system ends up given the starting state, because there are nonlinear functions in here. If we show how this looks for different noise parameters, we get different kinds of distributions; the darker a point, the more likely the state. This is a system with errors in both translation and rotation. In contrast, in this case the distribution looks like this: the system is actually pretty good at rotating, but the translation is very noisy, so the main uncertainty is
distributed along this line. And in this case the robot can go forward quite accurately but has a large error in its rotational component. So the individual terms are Gaussian distributed, at least that is the typical assumption, and this noise is propagated through the nonlinear functions, the sines and cosines from the standard motion equations, which leads to what we typically call banana-shaped distributions. Here it is shown as a histogram, and here as a sampled representation: if we start here with, say, 1,000 samples and propagate each sample with additional noise terms drawn from the Gaussians, we end up with these distributions. That is the odometry model, the standard model we will use in most cases in this course.

There is a second model, the so-called velocity-based model. Here we assume that the motion command u sent to the robot consists of two velocities: a translational velocity and a rotational velocity. Given my current state, sitting over here, if I execute a translational and a rotational velocity, I drive along this arc. It follows from the standard motion equations that if the translational and rotational velocities are constant over a time interval, even a very short one, then for that interval the robot drives on a circular arc, on a circle, or rather an arc, since it is not a full circle. The typical model then assumes that you can only change the two velocities at a discrete number of points in time, given by the clock rate at which you can send commands to the hardware. Say you can do that 500 times per second; then each of these time intervals in which the velocity is assumed constant is 2 milliseconds, so you are concatenating very many very short circular arcs. Of course this is not exactly what happens, because the hardware typically does not execute a new velocity instantly; there is some ramping, the velocity slowly increases and then settles at the desired speed, but we ignore that in these models. If we do this, then from the basic motion equations, essentially integrating twice, we end up with this equation for the new state given the old state. What sits in here is the orientation of the system, the translational velocity, the rotational velocity, and delta t, the time interval, and this gives us motion on a circular arc.

If you now compare this velocity-based model to the odometry-based model, do you see any structural difference, something that is a little bit odd? Yes, please? Sorry, I couldn't hear that, can you repeat it? Question: "In the velocity model we don't have an orientation." Well, we do have an orientation in here, the current orientation of the system, and in the command we have a rotational velocity and a translational velocity, but you are on the right track. Another remark about the equation: no, this should be correct, although I have to say it only holds for the case where the rotational velocity is not equal to zero. You get a different equation if the rotational velocity is zero, because then you drive a straight line, which is a circular arc with infinite radius; in that case the cosine appears in x and the sine in y. But this is the correct result for omega unequal to zero. Let me give you another hint: compare the two commands. In the first case we had a first rotation, a translation, and a second rotation; here we have a translational velocity and a rotational velocity. What is odd about that? Question: "We only have one piece of information about the change of rotation." Exactly, and so we end up at a point where we are constrained. The velocity command has only two parameters, two degrees of freedom, while the pose has three parameters. Say we are currently in this state over here, looking in this direction. If I set a translational and a rotational velocity, the robot moves on a circular arc, depending on the ratio of the translational to the rotational velocity. So if I end up at one of these points over here, I am constrained in where the robot looks: if it ends up at this point, the robot must look in this direction. What happens, however, if I want to express a motion where the robot starts here and should end up here, with an orientation that no circular arc connecting the two poses can produce? If I just drive along a circular arc, the orientation at this point is a different one. So I need an additional term that accounts for a final rotation; I simply need a third parameter to describe an arbitrary transformation in the three-dimensional space we live in, x, y, and theta. The robot moves on a circular arc, the arc constrains the final orientation, and we fix this by adding an additional parameter for the final rotation. In our equations we just add an extra term that tells us how much we rotate at the end. If you now look at the distributions the velocity-based model produces, they look pretty similar to those of the odometry-based model, and that is indeed the case: they are quite similar. The main difference is that the estimates of the velocity commands are typically much noisier than the information we get from odometry, so the odometry-based system tends to be more accurate: counting the revolutions of the wheels gives more precise information than the executed velocities. But the motions you see here look very similar to the ones before. Okay, at this point, any questions about these motion models? Okay, perfect.

So let's look at the next model, the sensor model; this was the second important term we use. The sensor model obviously depends on your sensor: a laser range finder, which gives you the distance to the closest obstacle in a certain direction, is something completely different from a camera, or a radar. In the beginning of the course we look at laser-based systems: we assume a laser range finder that measures, at a certain angle, the distance to the closest obstacle, so the models I present here are models for laser range finders. As the course evolves we will also look at other sensors: bearing-only sensors, which measure only an orientation, cameras for example, and, depending on how much time we have, Kinect-style sensors, which give us an image together with depth information, 3D information. These appeared in robotics only over the past few years but changed the field quite substantially, so it is worth at least saying a little about how to use such a sensor to build maps. But for today we assume we have a laser range finder, and this laser range finder
typically has a rotating mirror, a 45-degree mirror that rotates like this; the laser sits over here, is reflected by the mirror, and it is a time-of-flight sensor: it measures the time it takes to send out a pulse and receive it back, since there is also a receiver sitting here. As the mirror rotates, I get, at very short time intervals, proximity measurements at different angular orientations: measurement number 1, 2, 3, 4, and so on, up to 180, for example. So a sensor scan z_t consists of K beams; each beam measurement is the result of sending out a laser pulse and waiting until we receive it back. Every scan is then a K-dimensional vector of proximity measurements, and by knowing the position of the mirror I know the orientation in which each beam was sent out. A standard assumption, which most models in robotics make, is that these beams are independent of each other. What does that mean? If I know what the environment looks like and I know where the sensor is, then what I measure in this direction is independent of what I measure in that direction. Again, this is an assumption; I am not claiming it holds exactly, and one can actually show that it does not, there are dependencies, but it is the standard assumption. It means the probability distribution over the whole scan z_t, which consists of K beams, is the product of the probability distributions of the individual beams, and the model then only has to describe the probability of measuring something in one direction, given I know where I am and what the world looks like. There are different ways of describing this quantity; this is the family of beam-based models.
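Under this beam-independence assumption, the scan likelihood is just the product of the per-beam likelihoods. A minimal sketch in Python, working in log space to avoid numerical underflow for scans with many beams; the per-beam model `beam_likelihood` is a hypothetical placeholder for any of the models discussed next:

```python
import math

def scan_log_likelihood(scan, pose, grid_map, beam_likelihood):
    """log p(z | x, m) = sum_k log p(z_k | x, m) under beam independence.

    scan            -- list of (bearing, range) beam measurements
    pose            -- robot pose (x, y, theta)
    grid_map        -- whatever map representation beam_likelihood expects
    beam_likelihood -- callable returning p(z_k | x, m) for one beam
    """
    log_lik = 0.0
    for bearing, rng in scan:
        p = beam_likelihood(rng, bearing, pose, grid_map)
        log_lik += math.log(max(p, 1e-12))  # clamp to avoid log(0)
    return log_lik
```

Because the product of many beam probabilities quickly underflows to zero in floating point, summing logs is the usual implementation choice.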
The simplest example of these models is the so-called beam endpoint model. If this is what your environment looks like, this is the position of the robot, and this is the measurement it takes, so it measured this distance in this direction, then the model says: I ignore the map information along the beam and look only at the endpoint of the beam, hence the name, and ask how far this endpoint is from the closest obstacle in the map. Here there is no obstacle in the surrounding, so this measurement gets a very low likelihood; if the beam ended up here, where an obstacle is close by, it would get a much higher value. From a physics point of view this model is questionable: there may be a wall over here that the beam could never pass through, so checking whether there is an obstacle at a position behind that wall makes no physical sense. The reasons people use it anyway are, first, that it works surprisingly well in practice, and second, that it is extremely efficient to compute. All you need to do is expand the obstacles in your map a little; it is a convolution, like a Gaussian blur around your obstacles, expanding them with a decreasing function. For example, take a 1D map over x where this part is occupied with probability one and the rest with probability zero, the standard map. If you convolve it with a Gaussian kernel, you get a function like this: the further away you are from the obstacle, the smaller the value; the closer you are, the larger the value. You can precompute this very efficiently, and at runtime the only thing you need to do is look up the value in your array at the computed endpoint, and you are done. It is just a lookup in a map. This is a typical map, a regular occupancy grid map: white is free space, black is obstacles. You can turn it into this so-called likelihood field: the brighter the value, the higher the probability. If a beam ends in this white area, which corresponds to that wall, you get a high value; here in the middle you get a low value. So that is the standard beam endpoint model: very simple, very efficient, but, yes, not perfectly physically motivated.

There is another model, the ray-cast model, which is also frequently used. It is more expensive to compute but physically more accurate. Say that, according to my map, there is an obstacle where this star is, let's say 4 meters away. What is the likelihood of measuring a certain range, given I know the obstacle is 4 meters away? It turns out you get this funny-looking distribution, which is actually the sum of four distributions covering four typical effects that can occur. Again, these are modeling decisions, the assumption that four components suffice, but if you look at real data you can see that it resembles reality quite well. The first component is a Gaussian distribution around the position of the real obstacle: if the obstacle is 4 meters away according to my map and I measure in this direction, I get a Gaussian distribution around 4 meters, which models the measurement noise of the sensor. Then there is a component describing an exponential decay, this part over here, which covers dynamic obstacles: people walking around in the environment, other robots driving around. Such dynamic obstacles only affect the probability distribution up to the point where the mapped obstacle is, because whatever walks behind the obstacle I don't care about; the obstacle will reflect the measurement. So it is an exponential decay. The peak there in the back is due to the physical limitations of the sensor: every sensor has a maximum measurement range, and this is called a max-range reading. If your sensor measures only up to 4 meters and the obstacle is 5 meters away, you get no return, and this max-range reading is expressed by this truncated peak at the maximum value. Finally, there is a small uniform distribution over the whole measurement space, covering random effects, something I really cannot explain otherwise. If you sum up these four distributions, you end up with a model that looks like this. It is a frequently used model in robotics and physically more plausible than the endpoint model, but more expensive to compute, because you have to do ray-casting operations in your map; therefore it is also called the ray-casting or ray-cast model. Being physically more plausible, it should give you better results.

The final model I want to quickly look into is a model for perceiving distinct landmarks in the environment with a range-bearing sensor, such as a laser range finder, which gives you a range and a bearing, that is, a distance and an orientation.
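Before turning to the landmark model: the four-component mixture just described can be sketched in a few lines. This is a minimal illustration rather than the exact parameterization from the lecture; the weights and noise parameters below are hypothetical, and `z_exp` stands for the range expected from ray casting in the map:

```python
import math

def beam_model(z, z_exp, z_max,
               sigma_hit=0.1, lambda_short=0.5,
               w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1):
    """Mixture of four components for p(z | expected range z_exp).

    z      -- measured range
    z_exp  -- range expected from ray casting in the map
    z_max  -- maximum range of the sensor
    The four weights should sum to one.
    """
    # 1) Gaussian around the expected range: measurement noise
    p_hit = math.exp(-0.5 * ((z - z_exp) / sigma_hit) ** 2) / (
        sigma_hit * math.sqrt(2.0 * math.pi))
    # 2) Exponential decay before the expected range: dynamic obstacles
    p_short = lambda_short * math.exp(-lambda_short * z) if z <= z_exp else 0.0
    # 3) Peak at the maximum range: no return received
    p_max = 1.0 if z >= z_max else 0.0
    # 4) Uniform over the measurement space: unexplained random effects
    p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
    return (w_hit * p_hit + w_short * p_short
            + w_max * p_max + w_rand * p_rand)
```

In a real implementation the hit and short components are usually normalized over [0, z_max], and the weights and noise parameters are learned from recorded sensor data rather than set by hand.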
and um we're just looking into landmarks so something I can identify in the environment like typical landmarks used in robotics could be whatever Corners in indoor environments you extract corners from your range scan say okay that's a unique Landmark there's a corner and your map says okay at this position there's a corner and you can identify the corner in your range scan you know where the corners in your map and then you can actually compute what's the likelihood of measuring the corner um at that certain position and um the the Assumption so okay let's go to the details here R is the the the the range reading the pro the distance that was measured by the laser range scanner and fi here is the the um the orientation of the beam with respect to The Heading of the robot so because laser range finder has is rotating mirror so it measures uh in this direction this this this this this this this and this is expressed by this F over here we know that the robot is or we assume to know that the robot is XY Theta and we are observing some feature some Landmark J in the environment and the location according to the map is mjx mjy so the x and y coordinate of that landmark that can be the corner it could also be whatever the door frame if I'm my landmarks are door frames or if I'm outside outdoors and I um estimate the position of trees the trunk of the tree is a feature and then again what I should measure in terms of um distance it's just the distance of the feature and the robot in X and in y squar sumed up and taking the square root so this ukan distance between the position of the robot and where this this feature should be and the orientation is given by the uh Again by by the Aon function um minus the orientation of the robot so it's very similar to the rotation uh translation rotation model just that I observe only a point in space plus some noise this can be gaussian noise so that would be kind of the mean of what you expect to measure and again you have a 
measurement noise, which is often assumed to be gaussian. It doesn't need to be gaussian; depending on the properties of your sensor, bearing may be more accurate than range or the other way around, that just depends on your sensor information. But that's kind of one of the easiest models you can use for perceiving landmarks. So, kind of three different things: if you have dense maps, then you can take for example the beam endpoint model or this ray cast model, and if you work with landmarks, that's kind of the standard model that you use. And given those models you can realize Bayes filters, for example to perform localization, to estimate where the robot is. So we have this odometry-based model, we can estimate where the robot goes when executing a command. Given I have a map of the environment, let's say a map of landmarks, and I know where those landmarks are, I can say okay, the robot is at a certain position, what's the likelihood of observing, whatever, that tree which is standing outside there in front of our building at a distance of 20 meters and at an orientation of, whatever, 20 degrees. If I have that information, I can compute the observation likelihood, the observation model, and from the odometry information I have the motion model, and then I can just implement a Bayes filter and get a recursive estimate of where the robot is. That's kind of localization. We will do that for SLAM, that's something we will start with next week. So, to sum that up, this lecture was kind of a short repetition of the Bayes filter, actually a short derivation, because the Bayes filter itself is not that complicated. That's really the full derivation of the Bayes filter, that's all the magic behind it, that you have seen here in the first, whatever, 20 minutes of the lecture today, and that's the recursive framework for state estimation. But it leaves open a lot of design decisions: how to implement your probability distributions, what kind of
models you allow for. The only thing it requires is for you to specify two models, the motion model and the observation model; of course these are essential quantities in this framework. And then in the second part of this lecture I quickly went through typical motion and observation models. So that was the odometry-based model and the velocity-based model for the motion model, and a standard beam endpoint model and ray cast model for laser range finders, as well as a model for perceiving landmarks in the environment. If you want to know more about that, or if that was still not enough on the Bayes filter itself, there's a second chapter in the probabilistic robotics book which revisits this; it is kind of similar to what I've shown here. And there's also chapter five in the introduction to mobile robotics course, which is pretty similar to what we have shown here; maybe it took a little bit longer there to derive it and provides a little bit more detail, but actually I think all the key essence that you will need was presented today. And if you want to know more about motion and observation models, there's again chapter five and six in the probabilistic robotics book, or chapter six and seven in the introduction to mobile robotics course taught last summer term, and video recordings are available for that course as well. So if you still feel uncomfortable, if you think you still miss something, it may be a good idea to either watch the videos of the course, but they are not too dramatically different from what I told you because I was also involved in teaching this course, or I would then probably recommend you to look into the probabilistic robotics book. I just checked the library, there should be 12 copies available, so enough I guess for this course, and so you can actually revisit this concept. Maybe the book explains it better than I did, I don't know, you can judge yourself. Okay, that's it for the first hour, thanks so far, and we will just continue in a few seconds with the
extended Kalman filter. I just think we make a 5 minute break, open the windows, and then we can continue. Thanks.
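As a recap of the range-bearing landmark model from this hour, here is a minimal sketch: expected range is the Euclidean distance from robot to landmark, expected bearing is atan2 of the offset minus the robot's heading, each with independent gaussian noise. The noise values `sigma_r` and `sigma_phi` are illustrative choices, not values from the lecture.

```python
import math

def landmark_likelihood(z_r, z_phi, x, y, theta, m_x, m_y,
                        sigma_r=0.1, sigma_phi=0.05):
    """p(z_r, z_phi | robot pose, landmark position) under gaussian noise."""
    r_hat = math.hypot(m_x - x, m_y - y)              # expected range
    phi_hat = math.atan2(m_y - y, m_x - x) - theta    # expected bearing
    # normalize the bearing difference to [-pi, pi)
    d_phi = (z_phi - phi_hat + math.pi) % (2 * math.pi) - math.pi
    p_r = math.exp(-0.5 * ((z_r - r_hat) / sigma_r) ** 2) \
          / (sigma_r * math.sqrt(2 * math.pi))
    p_phi = math.exp(-0.5 * (d_phi / sigma_phi) ** 2) \
            / (sigma_phi * math.sqrt(2 * math.pi))
    # independence assumption between range and bearing noise
    return p_r * p_phi
```

For a robot at the origin with heading zero and a landmark at (3, 4), the likelihood peaks at a measured range of 5 meters and bearing atan2(4, 3), and drops off for readings away from that.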
SLAM_Course_2013
SLAMCourse_18_Robust_LS_SLAM_201314_Cyrill_Stachniss.txt
then we are going to continue with the second part of the lecture today, which focuses on the problem of what actually happens if the gaussian assumption that I have about my constraints doesn't hold. You can have multiple reasons why this doesn't hold. It may be that you have a multi-modal observation: either I measure the landmark is one meter away or it's 20 meters away, I simply don't know, maybe due to data association ambiguity; you simply do not know which feature it is that you actually observed here. Or you already committed to a wrong data association, saying hey, this is this feature, although it's a completely different feature. So you will get so-called outlier observations, which are far away from what the real observation would be. And as you will see in some small examples, having these outliers in your optimization problem is something which hurts dramatically, which actually screws up your solution. So already a few outliers can lead to an environment model which is completely unusable for doing any navigation task, where the geometry of what you computed doesn't fit the real-world geometry anymore, and one of the questions is actually how to handle that. So as we said, what we are doing here is minimizing the sum of the squared error terms, and as we have seen so far, this is the same as, or strongly related to, depending on how you formulate it, a maximum likelihood estimation in the gaussian case. So if you take into account your prior and all the things correctly, what you're doing is estimating the mode of the high-dimensional gaussian distribution over the poses of the robot and the landmarks. So you're finding the mode of a gaussian. The problem is if you have outliers. So let's say I always measure the distance to the wall: this is now one meter, one meter ten, 99 centimeters, one meter, one meter, and now an outlier, 100 meters. That's actually something which screws up my solution dramatically; I get a completely wrong
estimate of what the environment looks like. And the other thing is that we are completely restricted to gaussians. Let's say it's maybe one meter away, maybe 100 meters, because there are two walls, because let's say I'm standing at a corner: either my laser beam hits one of those walls over there, or it goes out through the glass pane to the next building. So I'm either measuring here two meters or measuring 200 meters. It would be nice if I could say to the system: it's either 2 meters or 200 meters, I simply don't know; take into account a distribution which is not a gaussian with a single mode, but a multi-modal distribution. So if we had the possibility to integrate a multi-modal distribution here, that would actually be a nice benefit. And what I want to talk about here today is ways for doing that: for being able to integrate, first, multimodal distributions, and second, having the ability to deal with outliers. So given that a certain number of constraints are outliers, how can we actually fix this problem? So why is this a relevant problem, or something which actually occurs in real situations? The first thing is places may look identical although they are not identical. If we're here in this lecture room or in the next lecture room next door, they look very very similar; it may be very hard for a robot to distinguish whether we are, whatever, in room 18 or in room 16.
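Coming back to the wall example from a moment ago, the damage an outlier does under the squared-error (gaussian) assumption is easy to make concrete: the minimizer of the sum of squared residuals for a single unknown is simply the mean, so one 100-meter reading among one-meter readings drags the estimate far away. This is a toy sketch with made-up readings in the spirit of the lecture's example.

```python
def least_squares_estimate(readings):
    """argmin_x of sum_i (z_i - x)^2, which for a scalar is just the mean."""
    return sum(readings) / len(readings)

# distances to the wall in meters, all consistent with "about one meter"
readings = [1.0, 1.0, 1.10, 0.99, 1.0, 1.0]
print(least_squares_estimate(readings))        # close to 1 m

# one wrong data association produces a 100 m outlier reading
readings_with_outlier = readings + [100.0]
print(least_squares_estimate(readings_with_outlier))  # dragged to roughly 15 m
```

A single outlier among seven measurements moves the estimate by more than an order of magnitude, which is exactly why the robust techniques in this lecture are needed.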
So the robot may generate a constraint which says I'm here in room 16 although we're in room 18, or it says I'm either in 16 or 18, I simply don't know, it's one of those two rooms. Another thing is if you have structures in the environment and there's a lot of clutter in the scene: the clutter, especially if it has a repetitive pattern, may lead to a multimodal belief about what the relative transformation between two poses is. Or, let's say, you walk along, whatever, a corridor with very few features; you only have, say, some pillars which are standing around, but the pillars show a repetitive pattern and you don't know exactly where you are, so you get a multi-modal belief of where you can be. The other thing is GPS, which can be problematic if you have these so-called GPS multipath problems: you get reflections of the GPS signal off larger buildings, so you may get outlier measurements. And the question is how can we actually take that into account? It's not always easy to get this information out of your GPS, because typically it runs a Kalman filter internally, or most of the GPS devices do that, so the gaussian belief you get out is screwed up. But if you get the raw measurements, you may be able to do better by allowing for multimodal distributions in here. So here's one small example. There's a small robot which moved through the 3D world, and this is similar to this repetitive structure that I was talking about before. If you move through a corridor, the robot can actually sit here, or here, or here, and its local perception will match quite well in each case. So you may get one of those multimodal beliefs, where the mode is roughly where the mean of that gaussian would be if I approximated this function by a gaussian, but maybe I'm ending up here in another mode, and that may be sub-optimal. So having the ability to take into account multi-modal beliefs is actually helpful. There's another real-world example: this is the Intel Research Lab
data set that you have also experienced. And if you look at those poses over here and the poses down here, how those individual structures match: if you just apply, let's say, scan alignment, you may say this may match. So maybe someone has opened a door which was closed before, or here is a door now closed which was open; all the other scans actually match quite well, and the same holds here. So even as a human you say, okay, there is definitely a misalignment between the scans, they don't fit perfectly, but that's something which actually can result from small changes in the environment: a person walking by, maybe closing or opening a door. You simply don't get the perfect alignment all the time, and this can lead to a wrong conclusion: hey, what I see here looks exactly like what I see there, so I create a constraint, in this case between those nodes. So here two constraints are generated, although they are kind of swapped, and this will distort the whole map. So this is how it looks: all constraints are correct except this single red one over here. With this single constraint you can already see that you don't have a straight wall over here anymore; it's kind of bent a little bit like this, due to this single constraint, which obviously has a really really large error, and the least squares minimization of the squared error terms actually tries to minimize that. If we add, I think, whatever, two, three, four, five... I think there were 10 wrong constraints, the map actually gets so distorted that it is unusable for navigation, at least down here. Here you still may be able to navigate with an autonomous system, but if you have 100 outlier constraints you end up in situations like this. And the original number of constraints, I don't know exactly how large it was, but I think it's around three thousand, something along those lines. So the number of constraints the system generates is around three
thousand. If you put already ten wrong ones in there, it's quite likely to screw up. Here it was one hundred, which is still a very small number compared to 3000, just a small fraction, and it will actually end up in dramatic mapping errors, so the system is unusable. So having good data associations is really important: already screwing up a small number of places is something which can hurt your optimization if you don't take that into account. What I would like to talk about here today is how we can take this information into account. How can we deal with the problem that we have places which look identical, we have cluttered scenes, we may have GPS multipath problems, so the signal gets reflected for example at tall buildings and in this case screws up the measurement? How can we incorporate that into the graph-based SLAM approach? So the problem that we actually have is, if we look at an individual constraint: the likelihood of an observation given the current configuration of the nodes was a gaussian distribution, that's what we had before. If we now say, okay, we would like to have simply a sum of gaussians in order to have a multimodal constraint — so some of those constraints are multimodal, we simply have a sum of gaussians, right? That's the first attempt to solve this problem; this would be our probability distribution. What is the problem with this probability distribution? So I say okay, some of my constraints are these multimodal constraints, I will simply go ahead and implement that. What's the problem that you're going to experience if you make this a sum? So we know how to solve that, right? That's what we did so far. Let's say I want to go from here to here, what's the problem that shows up if you want to do that? If you do it exactly that way, you start coding your stuff, and at some point you say hmm, something doesn't work here. What is that? No, it's not that, that's not about the
dependency; you're on the right track. So the thing is, we have a weighting term down here, so we make sure that it sums up to one, so we assume we have normalized this. The sum is a pretty good starting point when searching for problems in this case. What's the problem with this sum? So we don't have a single constraint, we have a number of constraints, right? Hundreds, thousands, millions of constraints. How do we combine those constraints if we minimize the squared error? What are we actually doing? If we're minimizing the squared error, we are minimizing this expression over here, and this is the log likelihood of this expression. What's the log likelihood of this expression? The log will sit here in front; we can't move it inside. So if we have this number of constraints as a product of gaussian distributions, then when you compute the log likelihood, this turns into a sum of the exponents, and therefore we have the sum of the squared error terms. We simply can't do that down here: you still have the log over the sum over the exp of the individual terms. So if we go to the negative log likelihood, we're going to optimize here this term, minus a constant, and here we can't go further than that. That's the problem, that's where it fails. Do you see what is the dirtiest way for you to fix this? Let's say you started implementing that, your implementation is done, and you say: oh damn, I can't do that. What would be the ugliest trick that you can do in order to make that work? No, even worse... no, that's not quite what you're going to do. I mean, the sum is kind of the bad thing. What can I do with the sum? Instead of the sum... I can get rid of the sum in some nice way. Sorry? Oh, the integral — going to the integral actually makes our life typically worse, so that's not going to fly. You have a sum of gaussians, a weighted sum of gaussians, that's what you have in here, and you want to get rid of that sum. How could you do that? Okay, so if we say
okay, I have that sum of gaussians, I simply ignore all of them except the most prominent one — exactly: you replace the sum by a maximum operation. We just take the one which currently seems to be the best one. We may flip that during our optimization, so the error term may commit to a different mode in every step of the optimization, but the key trick is to simply take the mode which is currently dominant. The approximation error is actually kind of small if the gaussians are well separated from each other, if the modes are quite far away from each other. We say I might either measure 2 meters or 200 meters, so either the 2 meters is the dominant one or the 200 meters is the dominant one. If one mode were 2 meters and the other one 2 meters 5, then this may not be a good approximation. But what we're going to do is say: okay, just select the k-th mode which is currently the best one, so it's a maximization operation. Again, if the means of these gaussians are far apart from each other, the approximation error isn't that big; if they are near each other, it's not a disaster, but you may have bigger errors. And the nice thing is, if you move the max operation in here: maximizing a function or maximizing the log of that function is equivalent, so we can move the log inside and have our problem solved. That's kind of a nice thing. So if you compare that — let's say we have two possibilities here which may look like this, and we go for the max mixture, so taking exactly this expression over here — this would be our distribution. It's always the max of these two functions, therefore you have this non-smooth transition over here. If you compare that to the sum of gaussians, it looks like this. So the approximation error that we make: in reality we have this situation, but we approximate it by this situation. If we anyway are just looking for the mode, maybe the approximation error is not too
big. I mean, here the two gaussians are actually kind of close to each other; if they are further apart, this value actually goes down nearly to zero, and this value again nearly to zero, so the approximation error isn't that huge actually. So this is the idea of the max mixture approach: to change the optimization from a sum of gaussians to a max of gaussians. Since we don't have the sum anymore, we only have the max in here, then for the log term I can move the log inside, and you see I just have this constant factor, which I had before as well. For the negative log likelihood that I'm minimizing, this is exactly my expression, so I stick with exactly the same operation here. The only thing I need to do: whenever I compute the error function, I need to pick the mode of the gaussian which gives me the best performance. This is actually a really nice way to handle this problem, quite practical, because if I need to integrate that into a current optimization system, I can do that very very easily. The only thing I need to do, whenever I encounter one of those multimodal constraints: I don't compute a single error, I compute the error for every mode and simply select the one which has the best performance. So it's just a loop over the individual modes of that gaussian distribution, and then I perform the optimization as I was doing before. The rest stays exactly the same in my code. The nice thing is that between iterations the system can swap between different modes, and therefore, although the optimization in one iteration takes into account only one mode of the gaussian, as you can switch the modes, you still have the ability to deal with multimodal constraints. If I do that, this is what the result looks like. So this was the original stuff you have seen before: one wrong constraint, 10, and 100. If I go for max mixtures, let's say either it's a perfect fit or it's a very very very flat
gaussian — this can be inlier or outlier. So you have a gaussian distribution if the constraint is somewhat in line with the current configuration of the graph, but if it's far away, it's a very very flat gaussian, so it doesn't really hurt a lot. If you do that, all these constraints are bimodal constraints: one mode is the original constraint that you measured, and the other one is a flat gaussian with a large uncertainty. The system swaps to this other one if the constraint is an outlier; there's a very high likelihood it will swap to that, unless it's an outlier where you have a bad initial guess and the bad initial guess is in line with the outlier — then you may run into problems. But if you do this, you can actually add 1, 10, 100 outlier constraints, and those are constraints which simply swap to the other mode and don't harm the optimization much. The solution may still be a little bit different from the one computed completely without the outlier constraints, because you still have this very very tiny error, but it's actually within the noise that you typically have in those real data sets. So it's actually a nice, interesting way to deal with multi-modal constraints. And even if you look at the runtime — the red ones are the multimodal ones and the other one the regular solution, here with the Cholesky decomposition — you actually have a very similar performance in runtime. So you don't increase the runtime a lot by just checking those individual constraints. The evaluation of the error is somewhat more expensive, because every constraint can be multimodal, or bimodal in this case, but there's no big difference in the operation of those systems whether you use the red or the blue plot. This is exactly the trick that is used: so if you just want to deal with outliers, the red one is
kind of the inlier and the blue function is the one which is the outlier. So this is a bimodal distribution: one mode for the inliers and the other one for the outliers; the red is inlier, blue is outlier. You can also handle those cases where you say I'm either there, or there, or somewhere else: you have a bimodal distribution, mode one, mode two, plus the blue curve, which is again the outlier. So I can still handle these multi-modalities and be able to deal with outliers. Or we can even do things for odometry. Especially if you start up a robot and the ground is muddy, the wheels may slip before they get grip and the robot starts driving. If this is the case, although you're executing a command, you're standing still, and then you start moving, so you may get this kind of distribution: in most cases the vehicle actually executes what you tell it to do, but in some cases it simply doesn't move, and you can even handle this in odometry constraints with these multi-modal constraints. So this max mixture idea is actually a pretty simple idea; it's actually funny that no one had done that in robotics until recently. A few years ago it was Edwin Olson and Pratik who first came up with this idea of using max mixtures. Because it's a very simple trick, it can be perfectly integrated into an existing system, it has quite some nice properties, and it's actually a smart idea. So when reading this paper you ask yourself: why was I not the person who had that idea? Because it's actually really easy to do, and it supports these optimization systems quite a lot, so that's a nice thing. Another thing is it can handle both things at the same time: data association errors as well as multimodal constraints. You can argue that multimodal constraints in practice do not happen that often, that your measurement is really multimodal; you have to look for cases where your
sensor properties really reveal this, but there are some situations where this is the case, and you can actually handle that flexibly in your system. So the combination of outlier rejection and dealing with wrong data associations is actually kind of nice. We can also do this, obviously, in 3D. This is again the data set with the sphere that we have seen before, a robot moving on a virtual sphere with constraints. This is Gauss-Newton and this is the max mixture Gauss-Newton. You can see here there's a non-perfect alignment, because you don't see the regular structure; this results from a single outlier constraint, which I think connects those nodes over here. Here you don't see that at all. If we increase the number of outliers to 10, it's hard to identify the sphere here, and if we increase to 100 outliers, this is just, whatever, a big mess, whereas this one is still able to solve those things quite nicely. So that's one way of dealing with these outliers. Other techniques have been proposed in order to deal with those outliers as well. The second technique is dynamic covariance scaling. This is the equation, the minimization that we do in the standard squared error minimization, the sum of squared errors, and the key trick of dynamic covariance scaling is to just replace this information matrix here with a variant which has a scaling factor, a constraint-dependent scaling factor, added to it. The intuition behind that is: if I have a constraint which has a large error, where the current configuration is far away from what the constraint tells me, just increase the uncertainty that is associated to it, so scale down the information matrix. It's just a small change. The question is still how we actually compute this scaling. So the main change is we go to this formulation: we have the
scaling factor over here, and we need a good way to compute the scaling factor. So how can we actually do that? There's actually a closed form you can derive under certain assumptions, where you end up with this operation here. This is just a free parameter here, and this is the original error, the chi-square error, this term over here. So what you do is: you compute the original error, then you compute this s_ij, and just multiply your information matrix with it. This leads to the case that constraints which are far away from what we expect have a smaller influence on the optimization. We can actually visualize this. What you see over here: the black curve is the parabola which results from the squared error, this is the value of this scaling function s, and the red curve is the scaled error, so taking the black curve multiplied with the corresponding blue value over here. In this area, kind of the core center of attraction, both perform equally well because there's no scaling involved, but the further you move out, the more the red curve gets scaled, so it gets flatter and flatter. And if you evaluate at one point, let's say this point over here: that would be the original error term, and this is the scaled error term, which sits over here. So the error is weighted down the further I'm away. However, we still have a linearization point: if I compute the Jacobian over here, I still have a Jacobian which drags the system in the right direction. So even if I initialize quite far away, I still get pushed in the right direction. We have a small video where we see how that works exactly: as the error increases, as you're further away from the mode, this is what your error function looks like. The further away, the more you down-weight the influence of this constraint, because you increase the uncertainty of this constraint. Through the
gaussian distribution you can generate a flatter and flatter gaussian the further the point actually is away. So this is also a technique which is quite easy to implement, because you just need to compute the scaling factor and multiply it with your information matrix for every constraint — also something you can do quite efficiently. Okay, let's look a little bit more into whether there is an overall framework which brings together the standard optimization and what I've shown you here. The problem that we have with this gaussian distribution — and this holds for max mixture as well as for DCS — is that the tails of these gaussian distributions contain too little probability mass; they're too close to zero. It's very unlikely under the model that you are really far away from the mode, and therefore, if you have one outlier which is really far away from the current estimate, the whole model is dragged in this direction. That's the problem that we have with this gaussian distribution, and therefore especially error terms with large errors — these outliers — can actually screw up the optimization when computing the minimal error configuration. One thing you can do fits into the framework of so-called robust M-estimators, which intuitively say: okay, we don't assume a gaussian distribution, we assume non-gaussian noise, especially with heavier tails of the distribution. Typically one uses the formulation that the likelihood is again an exponential function of minus this term rho of the error value over here, and what I'm going to minimize is actually this expression over here. I have this function rho and I can encode whatever kind of noise model I want in there. It still depends on the error term, and it's still an argmin operation over the sum of those terms, the log likelihood. So this is something
which is a little bit problematic, or rather the max mixture framework doesn't perfectly fit in there, because we still have this max operation in there; but you can actually show that DCS is a variant of this. Depending on how you select your function rho, you now get different properties in the optimization. If rho is just the quadratic function, then we have exactly the original problem, that's what we minimized: exp of minus the squared error. So with this one we are exactly in the gaussian world, and now there are different techniques for how we can address that. One thing is we could take simply the absolute value: we don't square it, we just take the absolute value of the error, so not the parabola that we had, but the absolute value function. You could also use the so-called Huber M-estimator: then the function is a parabola if the error is small, so around the minimum error configuration it's a parabola, but outside it's just a linear function. So you have the parabola in the middle and then a linear function outside, which means at least if you have an outlier which is further away, the error doesn't grow quadratically, it just grows linearly. It's a very popular function. There are several other ways; if you look at the literature, there are a large number of these so-called kernels which are introduced into the system in order to get a better behavior, or to meet certain error characteristics that your system actually reveals. So this is an example of the Huber cost function: as I said, we have this parabola around the zero-error configuration, so if the current configuration is close to zero error I have the parabola, and out here it goes into these straight lines. And this is just an example of different ones: this was the L1 norm we said before, quadratic is just the quadratic function of the
original problem that we have, this is the Huber cost function, and there are other cost functions which are again a kind of parabola here and then flatten out over here, or which approach a flat line, like the Blake-Zisserman kernel, Cauchy, corrupted gaussians. So there are different choices of the rho function that we can plug in there and that we're then trying to minimize. And if you look at the max mixture approach, it's actually kind of similar to this corrupted gaussian. If you plot both of them together — this is the corrupted gaussian, the blue one over here, and the red one over here is the max mixture — I still have this max operation when I switch, so this is the max mixture for a bimodal distribution for dealing with outliers. You still have this jump over here, when you jump from one mode to the other, which is non-smooth, but in all other areas those two functions, if you compare how the kernels look, are actually pretty similar — the function that I'm going to optimize in the end is actually pretty similar in both cases. And the framework also holds for dynamic covariance scaling: one can actually show that dynamic covariance scaling is a special case of these robust M-estimators. So in this robust estimation, the choice of this function rho kind of encodes the noise properties that you expect. If you take a quadratic function, you live in the gaussian world, and depending on what your noise properties are — do I have outliers, and how many outliers do I have, or what certain patterns do I need to respect when I do my optimization — you can change your rho function. That means you're leaving the gaussian world, and the system optimizes according to a different cost function, but this allows you to take into account, for example, these heavier tails, so that outliers are still not weighted that dramatically and don't impact your solution so much just because they
completely disagree with the solution that you actually have before the um so the huber cost function some cost functions is often used this is the parabola close to the mean and the linear function outside and then there's max mixture which is quite similar to a corrupted gaussian it still has the advantage that it can can handle multimodal distribution which is which is kind of a very nice property a property that i like and for dynamic covariance scaling just as a node this is similar to a robot stem estimator of these families that we have seen it's actually an equivalent that you can show so actually to sum up we don't have to do a lot of really really bad things in order to deal with outliers of course we are leaving the gaussian gold if we do that and this simply means we have a different assumption about the noise that we're going to experience but i can quite nicely integrate that here or a lot of those kernels can be quite nicely integrated into this least squares framework where you just change your error function in a certain way so you still have your exponential function but you only change this row function in the exponent to still be able to optimize the log likelihood so this row function goes into the log likelihood but you still can do compute the logarithm which is something you need to do in order to come up with an effective minimization procedure i just picked out two examples here one was max mixture and this dynamic covariance scaling um which have been proposed in robotics but they are this robust m estimation is something which was not invented in robotics which um comes from different fields so from a numerical optimization or some from computer vision like the plexism and kernel was uh one of the kernels which which came from computer vision and they are quite attractive framework to deal with outliers and in most realistic data sets and situations when you deal with robotics there's a non-zero probability that there will be a 
datasization error in there and the more likely this data association error the the worse it gets if you optimize without taking these outliers into account and that's actually an easy way to integrate that and often actually requires not that many code changes so you can actually adapt a current system which cannot deal with outliers to systems that deal with outliers just picked out two examples here in terms of literature this is the max mixture approach by edwin olsen and particular for this robot's max mixture as well as the dynamic covariance scaling which is another alternative on how to weigh down the errors if you're far away from the current estimate and this is actually equivalent to performing a robust m estimation with a specific row function in there so that's kind of i know this was ford on the other hand um it wasn't too the idea should be clear to everyone what happens here that by changing this function you can't get much better behaviors kind of deciding which function to use for the underlying optimization problem is not on it's not always an easy and easy choice so this requires some expert knowledge some good intuition on coming up with the way with one of those functions but max makes her work it's kind of nice this dynamic covariance scaling has also been proven to work quite well and um but also the huber kernel some standard choice that you can select if you do this error minimization okay so that's it from my side for today with this lecture we actually finalize the kind of back end looking into back ends for slam systems and next week which is the last week of the term i will briefly talk about front ends and give kind of a short summary on what typical front ends exist obviously we're not going to all the details as we did that here uh for the back end this is also one of the reasons is the front end strongly depends on the sensors or the assumptions you do in there so generalization from one sensor setup to another sensor setup can be 
quite tricky on but the back end itself which sits here doesn't really change that much therefore the focus in this course which was much more on the back end and but at least i would tell you a little bit about what typical front desk exists and how you could realize a front end if you want to build a slam system well that's something we are going to do next week that's it from my side thank you very much and hope to see all of you next week thanks
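As a concrete sketch of the Huber kernel described above, and of how such a kernel can slip into an existing least squares solver via iteratively reweighted least squares: the threshold delta = 1.0, the toy measurements, and all function names here are illustrative assumptions, not something taken from the lecture.

```python
def huber(e, delta=1.0):
    """Huber kernel: parabola for |e| <= delta, straight line outside."""
    a = abs(e)
    if a <= delta:
        return 0.5 * e * e                 # quadratic near the minimum
    return delta * (a - 0.5 * delta)       # only linear growth for outliers

def huber_weight(e, delta=1.0):
    """IRLS weight w(e) = rho'(e) / e: inliers keep full weight,
    large errors are down-weighted (cf. the covariance-scaling view)."""
    a = abs(e)
    return 1.0 if a <= delta else delta / a

def robust_mean(zs, iters=20):
    """Toy robust least squares: estimate x from direct observations z_i
    by repeatedly solving a reweighted quadratic problem."""
    x = sum(zs) / len(zs)                  # plain least squares start
    for _ in range(iters):
        ws = [huber_weight(z - x) for z in zs]
        x = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return x

zs = [0.9, 1.1, 1.0, 10.0]                 # 10.0 plays the outlier
print(robust_mean(zs))                     # ~1.33, vs. the plain mean 3.25
```

With a quadratic rho all the weights would stay at 1 and the estimate would remain at the plain mean 3.25; the Huber weight shrinks the outlier's influence, which is the sense in which the lecture says robust kernels require "not that many code changes".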
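The comparison between the corrupted Gaussian and the max mixture can also be checked numerically. This is a minimal sketch under assumed parameters (inlier sigma 1, outlier sigma 10, inlier weight 0.9, all illustrative); it only shows that the two negative log-likelihoods nearly coincide away from the non-smooth mode-switching point.

```python
import math

def nll_component(e, sigma, w):
    # negative log of w * N(e; 0, sigma^2), dropping the shared 1/sqrt(2*pi)
    return 0.5 * (e / sigma) ** 2 + math.log(sigma) - math.log(w)

def nll_corrupted(e, s_in=1.0, s_out=10.0, w=0.9):
    """Corrupted Gaussian: minus the log of a weighted SUM of two Gaussians."""
    p = (w / s_in) * math.exp(-0.5 * (e / s_in) ** 2) \
        + ((1 - w) / s_out) * math.exp(-0.5 * (e / s_out) ** 2)
    return -math.log(p)

def nll_max_mixture(e, s_in=1.0, s_out=10.0, w=0.9):
    """Max mixture: the sum is replaced by a max, so the negative
    log-likelihood is simply the MIN over the per-component costs."""
    return min(nll_component(e, s_in, w), nll_component(e, s_out, 1 - w))

for e in (0.0, 2.0, 20.0):
    print(e, round(nll_corrupted(e), 3), round(nll_max_mixture(e), 3))
```

For a large error like e = 20 the outlier component dominates and the two costs agree almost exactly; near the switching point the max mixture shows the small jump mentioned in the lecture, and since a max is never larger than a sum, its cost always sits slightly above the corrupted Gaussian's.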
|
MIT_CMS608_Game_Design_Spring_2014
|
32_Altering_Rules_Playtesting.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. [SIDE CONVERSATION] PROFESSOR: Before we go into the test, there's a couple of tips that I want to give you. Some stuff for you to take notes on. But you don't really have to do this today. At some point in time in your prototyping you might end up getting stuck. And you're not quite sure what to change. So far I've been asking you to do things like amend rules, right? Replace rules with other kinds of rules, kill rules that aren't working. Some other ideas that you might want to play with: if there is a resource that's currently limited you might want to make it unlimited. And vice versa. Say the number of steps that you can move, the amount of currency that you have, make it unlimited or make it limited. It could change how your game feels a lot. You can mess with other players, too. Maybe even sort of messing up the types of decisions they get to make, or maybe even the order that they get to make them. Could be something as simple as reversing the order of who goes next. But it could be more significant, like forcing another player to make a move that they don't want or taking away their options. You can mess with the play order. Like I said, you could reverse who goes next, but then also the order in which certain rules get played out. Could be something that you should be always trying to do with your own rules. Do your rules have to be executed in the same order? In fact, the answer is no. You can probably rearrange your rules and get a very different effect, even without writing a new rule. If you're going to deal with numbers at the prototyping phase, generally you don't want to be making small changes.
You don't want to be making 10%, 20% changes to your numbers. You want to be doing something like changing things by 50%, maybe doubling it, maybe halving it. At least a change of 50%, but better multiplying or dividing by two. This will give you a much better sense of whether that change to the numbers is going to be the thing that's going to solve your problem. If you get the numbers wrong but the changes lead in the direction that you want, then you can dial back the scale of the change. Maybe you have a variable change that was too drastic, but it started achieving the things that you wanted. And so, OK, all right. Maybe instead of halving that number you make it 75% of that number. As an exercise, once your prototype is already starting to work, one trick for trying to simplify your prototype is to try to identify the very fewest rules that are necessary for your prototype to work. So you take all the rules out and introduce rules, then, one by one. The rules that you've already got. And you try to figure out what's that bare minimum. That's finding the core, and that's the core of your prototype, that's the thing that you might want to carry into a larger game project in the future. Finally, throw away. In fact what I'm going to encourage you, at the end of this class today, is to make a completely new prototype from scratch. For each person on your team. What I would suggest as an exercise is just to go back to the mechanic idea and see what else you can do with that mechanic. Whether it's stealing from other people or hidden information. All of your prototypes right now probably test one kind of way to interpret that mechanic. What are some completely different ways to interpret exactly the same mechanic? Make a prototype. You know, spend about half an hour to an hour. You only spent an hour on these prototypes so far, and a lot of it was figuring out what mechanic you were working on. Right? So try to build something, test it with dorm mates or test it on your teammates.
It would be great if you had a weekend meeting, for instance, to go into that meeting with as many different prototypes as you have team members. You just play each other's prototypes and have a conversation. These are all different ways to approach the same problem. So throw away what you got, basically, and make something new. And then revisit what you've got at a later stage. So that's some tips for you to do in the following week. But in the next hour this is what I'm going to be asking you to do, and that is to do a play test. Now, there are many different ways to do a play test. This is probably what you're going to be doing today. How many of you have made a game that's really only for one person right now? All right, everyone's made a multiplayer game. So you're probably going to do a player versus player test. That's fine, that's what we asked for in the assignment. You can do multiplayer tests for cooperative and competitive games. You can have other player tests where everybody gets the same set of rules or different sets of rules. So they're symmetric or asymmetric. The trick with multiplayer play tests is that they tend to be very, very loose. It's very hard to do, say, a really, really tightly controlled experiment. One thing that often happens is that when you have people playing a game-- and later on in the semester you're going to just give them a set of rules and they're going to have to interpret it-- they might interpret it in a way different from how you intended. They may come up with-- they may negotiate the rules based on things that they don't quite understand from your rules and come up with a solution that wasn't necessarily what you expected. They communicate openly. They'll talk about, like, hey you know is that rule-- do you really want to move there? You know? And things like that.
One thing that you can do is to have the person who is actually running the playtest, someone from your own team where you're trying to get information, explain the rules to people at the beginning, in early playtests. In later playtests we just hand them rules. But if they come up with house rules, if they decide to interpret your rules in a certain way, I would actually suggest not stopping them right away. Let them play it out a little bit. Because they're giving you a free iteration of your game that you may not necessarily have considered, and may end up working. It might work out great. But then you cannot-- But once they play a little bit of that and then you get a sense of how that works, take note of that, make sure that you know how that is supposed to work out. And then explain the rules. No, actually. I'd like you to try playing it with the rules interpreted this way, which is the way you originally designed it. And then you can get a second playtest out of the same group of people. Let's see, what else. This is absolutely necessary for multiplayer games. You are going to have to do that. Even if you make-- If you go to other classes and make prototypes for digital games on paper, you are going to need to do this sort of playtest. So you're going to get some experience doing that right now. I do want you to know that there are a couple of other ways to do playtests. The Wizard of Oz test, where somebody-- the person that you're inviting to play test is basically playing with somebody who's on your design team, and that person is playing the computer. This is very good for prototyping a digital game, especially a single player digital game. You can be very constrained on what information you provide the player. In fact, you don't necessarily even need to provide a player a full set of rules. You can just say, this is the computer screen, your finger is the mouse, or something like that. Or even giving them a simplified keyboard.
Up, down, left, right or something like that. That's how you communicate with the computer, you just push, or you point, you click by touching on the screen. The trick is to make sure that the player is-- that you're not giving clues to the player about what the computer is thinking. The computer should be communicating primarily through the things that a computer will show you. Either images, sounds, maybe numbers that change. But the computer doesn't actively explain the rules. If the game was supposed to explain the rules, you should have little prototype pieces of paper that you can hand out in front of the player. Now say the player is completely confused and you think, this would be a good time to introduce a tool tip. Oh crap, I actually didn't have a tool tip prepared ahead of time. Well, grab a Post-it pad and write it down, slap it in front of the player. Now that's part of your design. So you can be-- this can be a very, very deep prototype. You can actually test a lot of different things in a computer game on paper this way, depending on how much time you want to go into producing something to be tested. You don't want to spend too much time because, remember, prototypes are supposed to be disposable. They're supposed to be fast and cheap. So it's a Wizard of Oz because it's like the Wizard of Oz. It's somebody behind the scenes, manipulating what you're seeing. But there's no actual computer there, it's just you. You have to be very, very constrained yourself when you're running a test like this. You don't want to deviate from the algorithm that you set up for your computer, if at all possible. So if you've got some sort of AI character, you shouldn't be making human-like decisions for that AI. You should come up with a bunch of rules for that. This is another kind of prototype that you can do. Not for today's exercise, but later on in the semester.
If you decide to do a live action game, or if your game's really about just a person walking around in a space, even if it's going to be like a top down board game in the end, you could just do a live action game where you just have people actually walking around a real space. You explain the rules to everybody who's involved. Maybe your game's for five people and you explain the rules to all five people before the game, and then you let them go. The problem, of course, is that they're all going to interpret those rules their own way. And because they may be physically separated, they may not be negotiating with each other and as a result may be not even playing with the same set of rules. That's also a problem. It's difficult to monitor what's happening with a lot of people walking around a space and doing their own things, like individual agents. But it's really, really fast to prototype because you may not even need to write anything. You may not even need to produce any sheets of paper, you just tell people here's a bunch of rules. It's like setting up a schoolyard game. And so if your game's about, here's a character moving around a space, or if you're of course trying to actually make a live action game, then this is something that you can also do. This is just as valid a prototype as anything that you're doing with paper and cardboard. So here are the rules. There's going to be people who are going to play. I think we have five teams. One, two, three, four, five. Right? So that's a little tricky. How many of you have two player games? AUDIENCE: Ours has two to four. PROFESSOR: Two to four. How many of you have three player games? One, two-- three to four. Better with four. And how many players? Four. Yours is a four player game. OK. So there's three, three. Yours is three to four. AUDIENCE: Do you want me to grab Sarah? PROFESSOR: Oh, actually. That's a good idea. OK, all right.
So we're going to grab one more MIT Game Lab staff member to come in and help test whatever team is not getting a chance-- that doesn't have a team available to test. We'll probably jump in on this game. So can I just check the two teams on my right. You're both three person games? All right, you test each other's games. And then this group here, the two to four player games will test each other's games. OK? So you're going to test one game. [SIDE CONVERSATION] So, to clarify. You're going to test one game, and they are going to shift over and test the other team's games. We're not going to do it simultaneously. OK? People, I'm not done yet. I'm not done yet. OK. So we're not going to test games simultaneously. You're going to actually-- we're going to test one team's game at a time. So, say, the two groups on my left. If you are testing your four player games, we're going to play one group's four player game first. Then everyone's going to move over and play the other game. And the reason for that is because the team that designed the game, even though they're not playing the game, has a job to do. Someone's got to be the facilitator, sometimes. In a Wizard of Oz test that person is often also the computer, and that person's got the job of explaining the rules, right? Everyone else has got to be an observer. You should have a notepad, you should have something that allows you to take down notes really, really fast. You should be like writing down everything you can about what you're observing. In particular, keep an eye on the faces of the people who are playing the game. It's too easy to just get mired and try to record everything about game state. It's more interesting to-- it's more important to also pay attention to whether they're engaged or whether they get confused, whether they're being frustrated, or whether a particular game interaction-- what's interesting to them. You can tell a lot just by looking at someone's face.
So, the observers and the facilitator, once you're done explaining the rules really shouldn't be talking all that much. In fact, once you get onto the later stages of prototyping, when you have written rules, you shouldn't be talking at all. You should be leaving it to the players to actually figure out the rules on their own. And the reason for that is you don't want to bias your results. You don't want to accidentally suggest good strategies, for instance. Saying that no, you really want to be moving this first. Well, maybe you should make that a rule instead of making it a suggested strategy. So take that out-- Once you're taking notes, practice being as quiet as possible. This is a small room, it's very resonant. It gets pretty loud. You can have a discussion after class is over or after the two play tests are over about what you recorded and we can discuss what you're going to do with that information. OK? So I think the three of us will be playing this game. And you'll need to explain to us the rules and take down notes. All right? If you need note taking material, there's plenty of pads. [SIDE CONVERSATION]
|
MIT_CMS608_Game_Design_Spring_2014
|
9_Randomness_and_Player_Choice.txt
|
PROFESSOR: So let's talk about the reading, which was about randomness and player choice. Everyone has an opinion about randomness and luck, right? The reading started off about that kind of thing. So just to start things off, what are your opinions about games with luck? Do you like them? Do you not like them? And what do you like about games that have a luck element? AUDIENCE: I personally love a mix of skill and luck. So my favorite example would be poker. So you could pick up the game really quick, and you could be a novice, and you could win if you get lucky, and it makes you feel good. But in the long run, over a hundred thousand hands or more, it's the person with more skill that will end up winning in the end. So I like that dynamic of being easily accessible, but also having the skill element. PROFESSOR: OK, so that accessibility curve, that little bit of difficulty curve going in there as well. Yeah? AUDIENCE: I would say that a lack of predictability is good, and luck isn't, but luck is necessary to achieve unpredictability sometimes. PROFESSOR: OK, how do you mean? AUDIENCE: So basically, you need luck so that the game won't go the same way every time, not because you actually-- the randomness-- in portions, it has to be random, but that's the only way to make sure that different games take different paths. PROFESSOR: OK. Yeah? AUDIENCE: I like the unknowability aspect of it. I agree with [? John ?] that I like a balance between luck and skill.
And what I was thinking of was something like Catan, where you have this whole plan that you're trying to work out-- OK, I'm building my city, and all I need is an ore, and I just need a [INAUDIBLE], and then I can build my city and do all this stuff. But you just have to wait for it and plan around, OK, what if I don't get what I need? Things like that. PROFESSOR: Yeah. Yeah. AUDIENCE: I also think this is incredibly subjective. PROFESSOR: Absolutely. AUDIENCE: So [? John ?] mentioned that he really likes poker. He likes [INAUDIBLE]. Personally, I like games with a lot less luck than poker, but that still have a little bit. But then again, I know tons of people who are like, oh, my favorite game is pokeno. Sorry, which is all luck, essentially. And you feel like you're doing stuff, you do have decisions about which piece to move. But in the end, it's essentially 90% luck. AUDIENCE: Could I comment? PROFESSOR: Please. AUDIENCE: Can I go on what you're saying, Ben? AUDIENCE: Yeah. AUDIENCE: Even if you take a classic, 100% skill-based game like chess, for instance, in a professional tournament setting, there's still luck. There's a luck element to who you're playing, or whether you get white and black, or whether they're having a good day or a bad day. I think there's always luck in games, even if it's 100% skill-based. PROFESSOR: Does anybody know how white's determined when you start a chess tournament? AUDIENCE: Usually based on your rating. And they also, in a fair tournament, will play a match where each player plays three games of white and three games of black. PROFESSOR: Yeah. So they're trying to work as much of that luck out of the system. But you're right-- there is some, a little bit. And there's reasons for doing that. AUDIENCE: I would say chess is-- I mean, chess [INAUDIBLE] is sort of more predictable than other games [INAUDIBLE]. AUDIENCE: [INAUDIBLE] always going to be [INAUDIBLE]. AUDIENCE: Yeah, chess is more predictable. 
I mean, I don't think what I'm [INAUDIBLE] AUDIENCE: So it's like, for example, think of round robin tournament play, where the person you go up against depends on, to a large extent, how you've done in a tournament. But that person who you go up against is basically chosen from a pool of people who have scored pretty well. So there could be a grandmaster in that pool that gets selected, or there could be a weaker expert player. AUDIENCE: A lot of it depends on what you consider part of the game, also. PROFESSOR: Yeah, exactly right. AUDIENCE: You could say the selection process. You could also say, oh, I got lucky that my opponent didn't sleep last night. Is that really part of the game, or is it just that your opponent was a weaker player on that day? PROFESSOR: You have some more, [? John? ?] AUDIENCE: [INAUDIBLE] what I wanted to say was that, back to the question, with [INAUDIBLE] in terms of games, I prefer games which have skill at certain game elements, but it's not so much that they don't have luck as that you don't have to deal with the luck instantaneously. Like, for example, the board setup at the beginning of Catan. Or when you have, say, a deck where the top three cards are the cards you're choosing out of, or something, where it's randomized. And so every time, it's going to be different, which is the good part. And yeah, it might favor one person over another, because of luck, which is that part. But on the other hand, you still have time to deal with it with your own strategy. PROFESSOR: I'd say that probably carries over to chess, too. The setup of the board is a lot like the framing of a tournament, or the context the tournament is played in, the context of the players. AUDIENCE: So I would say that whether or not I like a game based on luck or chance-- it's also dependent on what my purpose for playing that game is. PROFESSOR: Absolutely.
AUDIENCE: So I really like sometimes playing chance-based games, and I've gone home for break, and I'm going to see my brother. And I haven't talked to him, and I haven't been with him in months, so we can play games. It's not really about the game. The game is just to do something while we talk about what we've been up to. Whereas once I'm home for a really long time, we'll play a skill-based game. And it's really all about the two of us competing at the game [INAUDIBLE]. So it's also not necessarily subjective by person, but by situation. PROFESSOR: Yeah, absolutely. Yeah? AUDIENCE: So in the game Dominion, there are basic [INAUDIBLE] cards all the time. And then there's a bunch of special sets of 10 cards, which [INAUDIBLE] game. The strategy [INAUDIBLE] the special 10 cards, and just buy-- and to do the optimal strategy on the stuff that's in every game, often you can win 30% to 40% of the time against the best possible strategy with [INAUDIBLE], even if you're using another set of cards, which is why-- which shows that-- But they're [INAUDIBLE], but at the same time, the game has a lot of strategy involved, despite the fact that it comes down to luck. And so when people are like, oh, this strategy is better than this strategy, because when we simulate 1,000 games, it wins 60%, 70% of the time. As opposed to actually playing the game and showing one game won. PROFESSOR: Yeah. Well, that's a huge thing, too, of how many times are you going to play this game, especially the games that you're making in your assignments? How often are these games played? Hopefully, you're testing as best you can, as much as you can. But I don't think you're going to get 1,000 plays out of them, right? Unless you're going to throw it through a computer. You don't have to. Please don't. Getting real people to play it is a huge component of it. 
Because there's something about luck-- I don't think we've talked about yet-- the ritual aspects and the performative aspects of luck and chance. Can anybody think of a game where there's something very particular to how luck feels there? Maybe you can take luck out of the system, but why you wouldn't want to? So I actually think about, in that kind of context-- but where you're talking about craps, you're talking about gambling games, the rituals that go around gambling games-- blowing on dice, rolling dice. What's craps if you remove the dice and replace it with some other kind of-- technically, getting to the exact same probability system, but different materials? Is that going to be a different game or not? AUDIENCE: Could you replace it with a spinner [INAUDIBLE]? PROFESSOR: Yeah. Is the probability curve the same? AUDIENCE: No. PROFESSOR: No, but you probably could. AUDIENCE: Yeah, you could probably break it down into [INAUDIBLE] the same. PROFESSOR: Yeah. AUDIENCE: I'm thinking from a pro aspect, where not exactly like craps, where you're playing against the house, the house is always going to have the edge. But the poker aspect, where you're playing against one other. And yeah, [INAUDIBLE] if you're that much better than everyone else, then you can play professionally [INAUDIBLE]. In which case, you wouldn't want to take the luck element out of it, because your customers-- that is, the people you're playing against-- aren't going to want to play against you if they realize that they're so bad. But if they win every once in a while, and they feel like they're winning often, but not a lot-- yeah, [INAUDIBLE] You wouldn't want that [INAUDIBLE]. PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] typically a large percentage of people that actually play poker now make money, even when include-- I believe it's somewhere between 1/5 and 1/2 of people will make money playing poker even without [INAUDIBLE]. AUDIENCE: I think it's a lot lower than that. 
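As a side note on the craps question above, the probability curve behind two dice is easy to compute, and it shows both why a uniform spinner is not the same game mathematically and how a spinner with unequal slot widths could match the dice exactly. This is just an illustrative sketch; the variable names are not from the discussion.

```python
from collections import Counter

# Probability of each sum of two six-sided dice: 36 equally likely pairs.
two_dice = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for s in range(2, 13):
    print(s, two_dice[s], "/ 36")          # peaked: 7 occurs 6/36 of the time

# A uniform 11-slot spinner over 2..12 is a different distribution...
uniform = 1 / 11
print(two_dice[7] / 36, uniform)           # ~0.167 vs ~0.091

# ...but a spinner whose slot widths are proportional to two_dice[s]
# would have exactly the same probability curve as the dice.
```

Whether that weighted spinner would still *feel* like craps is, as the discussion suggests, a separate aesthetic question.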
It depends on how good the table is. PROFESSOR: Are both these cases you're talking about in casino play? AUDIENCE: I'm talking about [INAUDIBLE] poker table at [INAUDIBLE]. And that's including [INAUDIBLE]. AUDIENCE: Well, I'm not sure if you mean in that session, or over the long term. But I think [INAUDIBLE]. AUDIENCE: Even on [INAUDIBLE]? AUDIENCE: I think in long-term, probably 10% [INAUDIBLE]. AUDIENCE: 98% of poker players are long-term losers, [INAUDIBLE]. [LAUGHING] PROFESSOR: What website? AUDIENCE: I don't know. TexasHoldEm-Poker.com. PROFESSOR: Sounds incredibly authentic. AUDIENCE: No, yeah. [LAUGHING] PROFESSOR: Go ahead. AUDIENCE: So to your question about craps, my first, I guess, response-- my thought was that if you change the dice to a spinner, even if it has the same exact distribution, it seems like a different game. So I started thinking about it, and if you have a game like online craps, that's kind of what you did, except you just show them a picture of dice. And that feels like it's the same. So I guess I'm not sure. PROFESSOR: Yeah. AUDIENCE: Maybe new rituals would come up with the spinner. Like maybe if you're blowing on dice, you rub your hands or something. [INAUDIBLE] AUDIENCE: And the question of mechanic versus aesthetic. It's the exact same mechanic, right? You have the random number generator that somehow gets you money or loses you money, with very specific rules on how it works. Whether it's the die, or the spinner, or a computer, that's what the user's experiencing. But at the end of the day, it's just like Monopoly-- at the end of the day, you have a random number generator that's moving you around the board, and you collect stuff. How they skin Monopoly doesn't really matter-- it's still Monopoly. PROFESSOR: Exactly. And they showed it. AUDIENCE: I guess going off of the feel, and the fact that with dice, some people believe it takes some type of skill. 
They feel like they can roll 6's more often if they, I don't know-- PROFESSOR: There's a lot more tactility to dice. Like maybe if I'm just cupping it the right way, maybe if I put the die facing the right side up on the palm of my hand when I roll it, maybe there's going to be something there. Maybe I've got the skill there. Maybe it's just for the back of the house. Maybe it's for the casino sites-- who knows. But yeah, you're tricked into feeling like there's some choice that you're able to make to change the outcome. AUDIENCE: So to build off of [? Damon's ?] point, that helps clarify my idea on it. Because I think part of craps, then, is the aesthetic experience. And to me, it feels like the dice, or at least the idea of dice, is fundamental to craps. If you don't have it, then it ends up being a different game, because it's a different aesthetic. PROFESSOR: So going back towards the beginning of the reading, one aspect the reading frames itself in is target audience. Who is the target audience for the game you're making? And it starts off with Candyland. Candyland, the most random, the least predictable, the least amount of player choice that there is in a game. But because of the audience that it's being designed for, that's OK. That's actually beneficial in the examples they gave. So have you thought about the target audience for your own games right now? Do you have a clue about what that might be? Or the context that you were saying, the framing you might think-- I know with your game, at least, there's some kind of party atmosphere going on in it because of the physicality involved, right? Any others? OK, because one thing to think about, especially when you're coming to this from a designer [INAUDIBLE], is to know the assumptions you have, and the personal likes and dislikes that you have. And you're going to be on a team, you're on a team of multiple people. You're all going to have different likes and dislikes.
One thing to try to really figure out early on is who is the person playing the game, and what are their dislikes, and what are their likes? At this point, just assume. Just come up with a random target audience person, and just make that assumption about what they may or may not like. Because I know you're going to have discussions, arguments over which one is best or not. How much randomness are we going to include in this game or not? Are we going to include this mechanic over this mechanic or not? Rather than making it about what you personally like, maybe try taking it to what another person is going to like. How it's actually going to be played. And you might not find that out until you're actually in the middle of play testing and you're getting it in front of other people. [SNEEZES] AUDIENCE: Bless you. PROFESSOR: Cool. So I want to talk about randomness some more later on. But first, let's spend about-- I think some of these games can be played in about an hour and a half, an hour? So we'll play some of these games until about 2:30 and talk a little bit after them. What we want to talk about as you're playing the game-- pay attention to how randomness is used in the game and how player choice is used in the game. They're all going to be pretty similar to each other, but with a different mix and a different balance. But similar types of mechanics are in each of these games, if I remember this correctly. So [? John, ?] you want to describe the games that you learned? AUDIENCE: So I checked out Race for the Galaxy. How many people have played this game? PROFESSOR: How many have played San Juan? OK. AUDIENCE: So I listened to the rules online a few times, but I didn't actually get a chance to play this game. But it's a game where you're trying to get victory points. And the main mechanic-- actually, there's really cool artwork, so I'd like to pass it around. PROFESSOR: [INAUDIBLE] think of the pieces in the game [INAUDIBLE].
AUDIENCE: [INAUDIBLE] AUDIENCE: But I think the unique part about this game is that there are phases, and each player selects what phase their setup phase is. And they each do different things. One phase lets you gain cards, another phase lets you bet cards. You can consume items, which gets you victory points. You can score planets, et cetera. And each person selects the phase that they want to play, and I think they get a bonus for doing so. Everyone simultaneously plays the phase that they want, and everyone can take part in that phase. So it seems like a really neat game. Maybe [INAUDIBLE] more granular aspects, and you can play around. Agricola-- I had the opportunity to play a few times, actually. And it's really fun. So check out the [INAUDIBLE] here. There's a lot going on, actually. There's a little-- what would you call this-- player board? PROFESSOR: Yeah, a player board. AUDIENCE: Player board. Everyone gets one of these. And then there's a common area as well. Multiple things, actually. PROFESSOR: Are you going to [INAUDIBLE]? AUDIENCE: Yeah. So there are these major improvements, which you can invest in, get victory points, and get power-ups eventually. There are different rounds to the game, and it's a worker placement game. So you start with a couple of workers, you're a family, and you're farming stuff, roughly. And you want to expand your farm. You can either build up your house, you could get livestock, or you can plow your fields. And there's different rounds, a finite amount of rounds. And resources respawn into this board every round, so it's actually quite cumbersome to re-up the resources every time. And the whole point about this game that I thought was really unique-- oh, you also get a limited amount of profession cards, and also minor improvements. And those are different every game. There's a pretty big stack of these profession and minor improvement cards, but you only get seven.
So that's the hand that you have to deal with, and you don't ever get any more. Yeah, so there's different rounds. And you only have a couple of workers. And the only way you can expand your family over time-- you have to pick and choose what your strategy is going to be. You don't have time to do everything, because the rounds come really quick. There's also a harvest phase, which is essentially-- so during the other rounds, you're amassing resources. And a harvest phase, where you cull the resources out from your farm. And you definitely don't have time to do everything. But at the same time, the scoring, the points at the end, rewards you for that respawning. Another thing that I noticed with that-- the cards that you get dealt, some of them have early game benefits. For instance, if you play this game, wood is super awesome at the beginning of the game, so get the wood thing every time. But as you go on in the game, wood [INAUDIBLE]. So it was a fun dynamic that I was able to experience. PROFESSOR: Did it come with beginner's rules, or first-time rules? AUDIENCE: I think it's recommended for younger players. PROFESSOR: Yeah, I'd recommend playing those. You'll get the same feel, and you'll probably get a full play-- AUDIENCE: [INAUDIBLE]. PROFESSOR: You'll still get a full play experience, and you can play multiple games a day. AUDIENCE: There's lots of bits. If I had one criticism, there are so many bits, and it's really tedious to re-spawn all the resources. PROFESSOR: And the colors aren't all that different from each other, either. AUDIENCE: I saw some online where actually, the pigs are-- they just had little pink piggies, and that box would have been cool. There's also a bunch of different decks, so I think there's tons of replayability. And I actually kind of want to buy this game, because at first, there's tons of cards. Really, the whole deck, it's basically all over. And reading every line is kind of overwhelming. 
But after playing it a few times, I really got the hang of it, and it was a fun experience. AUDIENCE: Every time I play this game, I also play [INAUDIBLE] with my friends. And every time, someone pulls out some interesting combination or combo that we haven't seen before, and it's like, oh, well. AUDIENCE: So let me ask how you do drafting, because online, I heard there's rituals around the drafting process. Did you have house rules? AUDIENCE: Usually, you do occupations first and then [INAUDIBLE]. You deal everyone seven, they take one, pass the rest to the left. AUDIENCE: OK, and it's a public sort of process? AUDIENCE: No, you only see the card that you have, until you can pass. AUDIENCE: Oh, so you get passed the same, you pick one, and then-- AUDIENCE: You start with seven. Everyone has seven in hand. Everyone picks one from their hand and passes the remaining six [INAUDIBLE]. And that person looks at that six. [INAUDIBLE] know all seven cards or not. AUDIENCE: Maybe that was just something that came up through the community in the ritual of playing this game. Or maybe it's written in the rules, but I-- PROFESSOR: Both are played professionally and for money. AUDIENCE: Oh, OK. PROFESSOR: And that's where a lot of the decks came out of. You'll see people-- just like Magic the Gathering-- the new decks come out, you'll get new tournaments going on. Really popular in Europe and Germany. AUDIENCE: So the drafting process-- is that something that spawned--? PROFESSOR: It probably came from pro play. AUDIENCE: Because we just distribute the cards. PROFESSOR: Which I think is one of the things you can do to sometimes balance skill. Doesn't always work. You're still going to have people who are really skilled at drafting and really skilled at reading the table, based on the cards and having a good memory of what cards came before them. AUDIENCE: OK, I can see that. So it's a fun game. Definitely check it out if you haven't. PROFESSOR: Cool.
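The draft just described -- deal everyone seven, keep one, pass the rest on -- can be sketched as a loop. This is a hypothetical sketch of that procedure, not Agricola's actual rules text, and the "keep the highest card" picking rule is made up for the example.

```python
import random

def draft(hands, pick):
    """Each player keeps one card per round and passes the rest to the
    next seat, until the hands run out."""
    n, kept = len(hands), [[] for _ in hands]
    while hands[0]:
        for i in range(n):
            kept[i].append(pick(hands[i]))
        # Pass the shrunken hands along one seat.
        hands = [hands[(i + 1) % n] for i in range(n)]
    return kept

deck = list(range(28))
random.shuffle(deck)
hands = [deck[i * 7:(i + 1) * 7] for i in range(4)]

# Made-up picking rule: always keep the highest-numbered card.
kept = draft(hands, lambda hand: hand.pop(hand.index(max(hand))))
assert all(len(k) == 7 for k in kept)            # everyone drafted seven
assert sorted(sum(kept, [])) == list(range(28))  # every card went somewhere
```

The skill the professor mentions -- reading the table and remembering what was passed -- lives entirely in the `pick` function; the dealing and passing are pure procedure.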
Got another more simplified worker placement game called [INAUDIBLE] by-- I don't know the name. And what this has is a lot of the randomness in this one coming from not knowing what's coming up in the tiles. So you're kind of uncovering this forest area, discovering new things, placing your workers in these tiles to basically get victory points. The victory points are a track around the board. So you'll see, actually, this conceit used a lot in these kinds of games, where the victory points are like a race, the [INAUDIBLE] at the right does this on a number of things [INAUDIBLE]. Very basic shapes for the pieces. Simple color, natural wood colors. And I think you can tell the difference-- I'm not sure if they do a colorblind test on this, but I assume that colorblind folks can see the different [INAUDIBLE]. Puerto Rico is kind of the board game version of, a simpler version of Race for the Galaxy. A resource-generation game. Again, played around rounds. Each person has their own plantation that you're basically building up. And the big thing with this series of games, and Race for the Galaxy does this as well, I believe-- so if you're the person who chooses what happens in a round or a phase-- I forget exactly what it's called-- then the person who chose it gets a privilege that they can do something a little bit extra. So part of the play is choosing the role based on both whether it's going to be good for you, or whether it's going to hurt the other people around you. Basically denying privilege and making something in the game happen before somebody else might have wanted it to happen. Dominion, classic Magic-based card drafting game. Lots of different types of cards, multiple copies of each. And again, you're playing for victory points. If you have played Dominion before, I recommend playing one of the other games. If you haven't played Dominion before, it's really easy and really fast. So it's 30 minutes. Your first playthrough will probably be about 45.
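The role-with-privilege mechanic just described -- whoever chooses the role gets a little extra, but everyone performs the action -- can be sketched like this. The role names and numbers are invented for illustration; they are not Puerto Rico's actual values.

```python
# Hypothetical roles: each has an action everyone performs, and a
# privilege bonus only the player who chose the role receives.
ROLES = {
    "builder":  {"action": 1, "privilege": 1},
    "trader":   {"action": 2, "privilege": 1},
    "producer": {"action": 1, "privilege": 2},
}

def play_role(role, chooser, players):
    """Everyone takes the role's action; the chooser also gets the bonus."""
    gains = {}
    for p in players:
        gains[p] = ROLES[role]["action"]
        if p == chooser:
            gains[p] += ROLES[role]["privilege"]
    return gains

gains = play_role("trader", "Ann", ["Ann", "Bo", "Cy"])
assert gains == {"Ann": 3, "Bo": 2, "Cy": 2}
```

The interesting decision is exactly what the professor describes: the chooser weighs not just their own bonus but what the shared action hands to everyone else at the table.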
I have not played this one. And I'm going to find out what it's about when I open the box. It's in multiple languages. Again, board with racing numbers around it, much smaller board this time. And what are we to be doing? You're archaeologists. You're trying to acquire knowledge for an excavation exhibition. You're planning excavations and exhibitions and getting victory points from digging in [INAUDIBLE] and finding valuable artifacts. So yeah, unfortunately, I can't tell you exactly what this is about. But it's likely resource gathering and a little bit of worker placement. So those are the 1, 2, 3, 4, 5, 6 games. Grab a game, set up a group. I'll call time at about 2:30 and see where we're at. So don't put your game away yet. If you guys could come back over to Race for the Galaxy-- what I'd like each group, each game being played, tell us a little bit about-- well first, tell us what the game was. And tell us about-- this is assuming you were able to get this far-- how randomness and player choice entered into the game. How was it used in the game, how did it play through? Don't just focus on the statistics of probability, but also talk about performance, and feel, and the tactileness of the pieces in what you're doing. So you guys go first. AUDIENCE: All right, we had Puerto Rico. The randomness had to do with, I guess, right here in the plantation tiles that flip up. The player's choice comes in with the role that you can use for that round, and also, I guess, whatever strategy you choose, be it building up enough money to buy stuff, or trying to get enough colonists so that every single one of your tiles is occupied. AUDIENCE: So you had a lot of choices. Basically, at the beginning of the round, you pick a role, and each role has a special privilege that you get. And then everybody, including you, performs an action based on that role. And so on your turn, you pick a role, keeping in mind there's a lot of public information.
So you can see what's good for you and also perhaps not as good for everyone else. And then each one of these roles, the actions-- everybody gets a choice on how they perform it. So for example, he was saying, these little tiles that flip up-- one of the roles lets you basically take one and put it on your board. So there's choice there between how you compose your board. And those lead into getting different resources and victory points and other such things. PROFESSOR: How did it feel? AUDIENCE: It was really annoying to set up. Tactileness-- nothing special, I guess. It's a lot of tiles, little bits you use. These are goods-- I think this one is tobacco, maybe? Corn. Those kinds of things, you sell, you get gold or victory points for. Here is a gold. PROFESSOR: You mentioned earlier about the affordances of the pieces to the play mat? AUDIENCE: Yeah. Is there [INAUDIBLE] play map? AUDIENCE: Oh, here. AUDIENCE: Yeah. So this is what the play board looks like. So you can notice, basically, there are two main types of grid. One's little rectangles, 2 x 1. And one's a 1 x 1 square. And so we have a couple of different-- so here is one thing [INAUDIBLE] clearly 2 x 1 and fit in that 2 x 1 square, and same with this one. And there are also little 1 x 1 squares. And so one of the interesting things is that with this board, you can tell there's clearly a limited number of things you can put on this grid, right? I don't think there's a mechanic for removing something once you've placed it. So it'll fill up eventually. So you have to keep [INAUDIBLE]. While you do want to build things quickly and try to get more resources at the beginning of the game, you also don't want to just fill your board with crap and then be stuck at the end of the game. So it's kind of similar to Dominion, in that sense. AUDIENCE: And while they do building [INAUDIBLE] really well, because [INAUDIBLE], maybe they could have done the other bits, where they're roughly the same size and shape.
And it was a little confusing [INAUDIBLE] square at the very beginning. PROFESSOR: How did the rules support those piece sizes? You've already got some affordances going on with the pieces. Do the rules piggyback off that? Do they take advantage of it? AUDIENCE: Not really. AUDIENCE: There were some diagrams that were pretty. AUDIENCE: Some of them, but they also didn't explain very well. They didn't diagram the player board at all, so basically, there's nothing here. This is just buildings and plants, and these are plantations. I think you just kind of figured it out, like I guess we did. AUDIENCE: [INAUDIBLE] down, though. AUDIENCE: Yeah, it did. AUDIENCE: [INAUDIBLE] AUDIENCE: We're only on our second round right now, like our second full, going around. Because the setup-- first of all, just sorting all this stuff out, it's kind of like Dominion, where it's a pain in the butt to sort out all the tiles, and the cards, and whatever. But also just trying to figure out what we're doing, and what's going on for the first round was really slow. AUDIENCE: I mean, I guess there's a little picture of buildings, and a little picture of palm trees, so-- PROFESSOR: It gives a lot of space on the map to describe these things that are already described on the card elsewhere. AUDIENCE: Kind of. So these roles-- AUDIENCE: [INAUDIBLE] PROFESSOR: That's true. AUDIENCE: This is a very skimmed-down version of what's going on, basically. [INTERPOSING VOICES] AUDIENCE: --actually a little more information, and the rulebook is the full information. AUDIENCE: [INAUDIBLE] AUDIENCE: Yeah, but I don't know how much I agree with putting the one-line summary on these little role cards. Because you could look at this, it says "trader"-- what does trader do here? AUDIENCE: [INAUDIBLE] information. AUDIENCE: Yeah, basically. I can understand putting it on the board instead of the rulebook, because no one wants to have to look through the rulebook every time.
But on here, it's immediate: I can see the roles, OK, I can look at the board. AUDIENCE: Do you think redundancies are bad in general? PROFESSOR: No, redundancies are great. Redundancies are awesome. We use them as best we can. But you have only so much space in the pieces and boards that you make, so you've got to really decide what's the most important thing for the person to know. And the most important thing-- probably need to tell them more than once. Through rules, through the play mats and pieces, through the just natural, this fits here and this doesn't fit there kind of thing. But yeah, definitely. And then text-- some people hate it, some people like it. I'm in-between, use it as it's needed. Great. Race for the Galaxy is kind of similar to this, isn't it? AUDIENCE: Basically. PROFESSOR: There you go. So tell us about Race for the Galaxy. Randomness, player choice, and then affordances for the pieces. AUDIENCE: [INAUDIBLE] AUDIENCE: Yeah. AUDIENCE: Yeah. [LAUGHING] AUDIENCE: [INAUDIBLE] AUDIENCE: Huh? AUDIENCE: Yeah. AUDIENCE: [INAUDIBLE] game [INAUDIBLE] Yeah, [INAUDIBLE] are really just based on what cards you got dealt. PROFESSOR: Can you show us how-- somebody do an example layout while another person's talking? AUDIENCE: You had a starting world, which was like a starting hand, which contributes a lot of your initial luck. And then the card that you drew earlier particularly for-- you could say [INAUDIBLE], if you don't get any useful cards, then you're sort of stuck, and you just have to [INAUDIBLE]. Whereas if you get good cards, then you can sort of-- they'll allow you to [INAUDIBLE] So a wrench in the game is that the game is designed to-- it's like there is sort of a way to have an engine by producing points every turn. But by the time you actually get the engine up and running, the game ends in one or two turns, anyway. I don't know. PROFESSOR: Does that feel like what you guys were doing in this one?
Struggling to build something that would produce stuff in the end? AUDIENCE: Yeah, yeah. PROFESSOR: And on the other games, too? AUDIENCE: It's producing immediately, actually. After the first full round, I think every one of us got something to sell to get stuff back, basically. AUDIENCE: It's interesting to me that those are the same games, sort of. But this has so many bits, and bobs, and gadgets [INAUDIBLE]. And [INAUDIBLE] PROFESSOR: From a deck of cards. What is this doing that that one is doing, too? AUDIENCE: The role selection. You choose what you're going to do, but everyone gets to follow suit with that. Then you get a bonus [INAUDIBLE]. AUDIENCE: The little bit there-- they exist here, but they're in the form of cards face down [INAUDIBLE]. And so it's not apparent that you have [INAUDIBLE]. AUDIENCE: Did you have to sort out those goods beforehand? Or is it just-- AUDIENCE: Nope. You literally-- it's like if you suddenly get a mining good on a mining planet, you take the top card in the deck, put it face down under there. PROFESSOR: So you're taking cards out of play when you're doing that. By generating a resource, you're reducing the amount of choices. Granted, there's a ton of choices up there. The deck is really large. So it's not having a huge effect. Is there a card that you can kind of bury cards underneath and just completely remove from play for this one? AUDIENCE: I mean, [INAUDIBLE] could be put [INAUDIBLE] until you discard it and then [INAUDIBLE]. PROFESSOR: So in San Juan, there's a chapel where every time you put a card underneath it, it creates a victory point, and that card can never be used for the rest of that play session. So a good strategy in that game is to take things like another high points-giving card that maybe you can't build, and somebody else could, and burying it-- removing it from play. AUDIENCE: Yeah, this game, you just hold it in your hand, never play it. PROFESSOR: Yeah. Can you talk about the player mats?
The kind of player aids it gives you? AUDIENCE: A very expansive cheat sheet. AUDIENCE: Yeah, the game uses a lot of iconography on the card. One team used a lot of the iconography there. PROFESSOR: To somebody who hasn't played before-- was the iconography helpful or hurtful? AUDIENCE: It was certainly very helpful. I think just thinking along the lines of [INAUDIBLE], there are very few in this game. So even though the iconography has a hand next to a card with a "2" on it, I still had to work to figure out what that means. AUDIENCE: On the other hand, one thing that they did was hexagons are victory points. They have the hexagon victory points that are actually little tiles that you can see. But then on the cards, you can see that on every card, there's the victory point value, and that's in a hexagon. And whenever there's a card that says gain victory points, you see it has a hexagon. PROFESSOR: Yeah. It could very well have just been shitty fireworks flowing out, like, this is the card to use. Instead, it's calling it out really clearly. That's great. AUDIENCE: Or the shapes-- I don't know, the icons make use of the card shape. They have a little rectangle-- PROFESSOR: There's also diamonds, and rectangles, and circles, yeah? AUDIENCE: Yeah. Diamonds and circles are for the two types of cards. PROFESSOR: Are they in different places than [INAUDIBLE]? AUDIENCE: Yeah, each card has a hexagon thing that [INAUDIBLE] as well as the [INAUDIBLE]. PROFESSOR: So yeah, thinking about placement of where you're putting your numbers, and where you're putting things on the card, where your eye goes when you look at the card. I mean, those are really complex-looking cards there. AUDIENCE: They even have a little-- in the upper right, they have sort of reminder symbols on the cards there. Because there's [INAUDIBLE] just to remind you, oh, this card is-- you might forget about this power, you should remember it. PROFESSOR: Yeah.
That game-- I think all of these are made by the same company. Yeah. Rio Grande? Oh, no, [INAUDIBLE]. So you're going to see different kinds of-- what do you call it-- style guides that they're using. The rules for these three games are very similar. I think Race has a little bit more advanced layout, but there's the basic kind of layout, where the rules on one side, some columns and sidebars on the other side, the occasional diagram. But that designer believed in, everything goes on the cards in some form. Everything's there. You can play the game without the rules, unless you have a really good memory. AUDIENCE: I mean, the thing is, you need to use expansions. And you can introduce expansions, and you don't need to read the rules in the expansions. And they'll introduce a whole bunch of iconography, and you'll look at it, I'm like, oh, I bet this is a [INAUDIBLE]. And I'll bet that this is what card means, and it's usually right. PROFESSOR: Yep. Cool, so we're still doing good on time. [INAUDIBLE] Randomness, player choice, and then affordances. AUDIENCE: Well, the randomness came when we first dealt out the occupation cards and the minor improvement cards. There's the [INAUDIBLE]. [INAUDIBLE] we deal out seven to each player, for each. So that's sort of where the randomness comes in. AUDIENCE: And it gets amplified, because these occupations and improvements seem like you could just go creating things with them. He got something that let him [INAUDIBLE] throughout the board that he would continue to play. And then he had another thing that he could use that allowed him to double that amount. So it really feels set up for-- yeah, it seems like you can just make these crazy combinations that basically give you farming superpowers. AUDIENCE: I forgot about that part [INAUDIBLE]. PROFESSOR: What was it like choosing? How did you decide what to choose, when to choose? Did you get far enough? AUDIENCE: Yeah.
In the beginning, I had no idea what I was doing and why I won. But as we got through, I started to get a sense of it. And now, I'm finding it easier to think about, OK, I'm going to need wood to build a room. I'm going to need food, because the harvest is coming up. But at the same time, I feel like I don't have a good enough sense of what's in the deck. Because he was just pulling out things that I had no idea could even happen. And then like, oh, wow. OK, that changes a lot. PROFESSOR: So what's a choice for him is a random situation for you. You just have no clue what might even pop up, because you don't have that information, right? AUDIENCE: Yeah. So I think it would take a while to get a good sense and fully understand what's happening. But I could already plan what I'm trying to do on my end. PROFESSOR: Can you talk a little bit about the affordances of it? Tactile feel, how did it feel when you were playing the game? AUDIENCE: I like the board setup [INAUDIBLE] know that either [INAUDIBLE]. And the [INAUDIBLE], the [INAUDIBLE] shapes, [INAUDIBLE]. In terms of the resources, it felt weird, because they were all either little circles or cubes. And you just have to remember what the colors mean. But then again, they're also [INAUDIBLE]. The little pictures on the common area that tell you what they are whenever you [INAUDIBLE]. AUDIENCE: Yeah, and there's this interesting thing where every round, if a resource hasn't been taken, then it increases. And now let's say there's three woods sitting here. The next round, there'll be six, the round after, nine. And so on, and it keeps building. And so to make it easy to remember that, what they have you do is they have you put it on these little squares to start out with, so you can know what you just added. And then there's an arrow pointing into this little bucket, so you know, OK, place it here to set up the round, put it into the bucket so that everything's there.
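The respawning just described -- three wood becomes six, then nine, if nobody takes it -- is simply a fixed per-round refill accumulating on untaken spaces. A minimal sketch, with made-up refill amounts:

```python
def respawn(board, refill):
    """Start-of-round refill: each space gains its fixed amount, so
    untaken resources keep accumulating (3 wood, then 6, then 9...)."""
    for space, amount in refill.items():
        board[space] = board.get(space, 0) + amount
    return board

refill = {"wood": 3, "clay": 1, "reed": 1}   # hypothetical per-round amounts
board = {}
respawn(board, refill)                        # round 1
respawn(board, refill)                        # round 2: nobody took the wood
assert board["wood"] == 6
board["wood"] = 0                             # a player finally takes it all
respawn(board, refill)                        # round 3
assert board == {"wood": 3, "clay": 3, "reed": 3}
```

This is also why it becomes a game of chicken, as the players note later: the longer everyone waits on a space, the bigger the payoff for whoever finally claims it.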
And I feel like it'd be pretty easy to lose track, like, oh, have we added wood here yet? We kept forgetting to add sheep [INAUDIBLE]. PROFESSOR: So you're farming. There's animals, and there's other things-- vegetation, stone, raw materials. It breaks them up, right? The squares are things that are animated-- they're living? AUDIENCE: Mm-hm. PROFESSOR: Circles are things that don't animate. Is there some kind of affordance going on there? Does that seem useful to you, why those are being distinguished that way? AUDIENCE: Yeah, because animals and resources behave very differently. Because these resources, you're usually expending to either get improvement, or build houses, things like that. Whereas animals, you have to manage a little bit more in terms of OK, I could kill this animal and eat it, or it'll give me points later. Or I can breed them, and I'll have to keep track of spacing them here. So there's a nice separation of what you're really doing with them. PROFESSOR: Cool. AUDIENCE: [INAUDIBLE] PROFESSOR: No, he's not in today. He'll be in tomorrow. AUDIENCE: OK, thank you. PROFESSOR: Yep. Great. Anything else you have to say about this one? AUDIENCE: It's really fun. I'll probably go buy it, because I want to play it again. PROFESSOR: Why is it fun? AUDIENCE: Probably the choices. There's just so much going on, and it feels like I could really make my own strategy. And I don't know-- I just had all these crazy plans of, OK, if he [INAUDIBLE] a room, that means that I can, which means I can build my family next turn, which means that I'll have more actions. And now I can do even more stuff, and get more resources. PROFESSOR: So there's this complicated system going on-- if I can try to remember it, understand it-- a complex system going on that you're going in the right direction, do you feel like you can kind of switch? AUDIENCE: Yeah. PROFESSOR: For people who played it multiple times, did you feel the same way? 
AUDIENCE: Yeah, because if you and your opponent aren't choosing a choice, it gets juicier and juicier every round. And so it's this game of chicken-- like, are you going to go for the wood? AUDIENCE: Yeah, there was one round where I kept nine or 10 wood. AUDIENCE: I don't know. That was fun for me. I like that. AUDIENCE: Just thinking back on what he said, I actually like this game a lot, too, having a lot to do with all the different choices. And there's a really complicated system that it took a while to kind of understand. And I don't understand it completely yet, but I'm getting a much better feel for it, you know. And I think that's really appealing, not having something so absurdly complex that nobody can actually do it. But not having something really simple, either. Just the sheer number of choices here makes it really interesting to me. Because I guess one of the things about having not that many choices is it almost feels like I could just make a computer play it, because there are only n possible outcomes of the eventual game. But something like this, that's just unbeatable, really, unless you actually make it a really smart player. PROFESSOR: Great. [INAUDIBLE] in the back. Randomness, player choice, affordances based on what you got to work with. AUDIENCE: So there's definitely randomness in it, because you draw your initial hand and every subsequent hand. And a lot of times, you can put together some ridiculous combination, [INAUDIBLE] would just get market, market, market, market, smithy. PROFESSOR: What about [INAUDIBLE]? AUDIENCE: What? PROFESSOR: What about [INAUDIBLE]? Why'd you do that? AUDIENCE: Well, so what he said-- right away, you just draw. And so what you can do is buy these action cards. And with these action cards, you can get free moves. And the market-- I think it's the best one, because you get an action, a buy, a coin, and something else, whatever. And so what I would do is I'd play the market one, then he'd draw. 
So I'd play the market one, I'd draw. And then I'd draw market, I'd play market, and then I'd draw, and then draw market. And then I'd play market. [LAUGHING] And then when I'd draw, I would draw a smithy. And then you could draw two more cards, and I would just draw two more cards. And I'd be sitting here with 15 coins, or 20 coins, and I can do literally whatever I want. I can buy it however many times I want with however much money I want, [INAUDIBLE]. AUDIENCE: So yeah, there are definitely some really powerful combinations that play with the luck factor. But at the same time, it feels like there's a lot of skill in the game. It does also feel like if I played it 10, 15 times, I would have a very good sense of what to do, and the skill would start to go away. Like if I played it 10 or 15 times, and everyone else did, I feel like we would always be much better at it. And there would be a developed strategy, or maybe two strategies, and counter-strategies, something along those lines. So they were saying it feels like infinite possibilities, only it didn't feel like infinite possibilities. PROFESSOR: So there are affordances in the game that can avoid that. What do you think they are? AUDIENCE: [INAUDIBLE] There are a million cards. PROFESSOR: Yeah, so that's the basic set. How many cards were in the basic set? AUDIENCE: 500, [INAUDIBLE]. Although I don't know how many [INAUDIBLE] that actually change [INAUDIBLE] 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17-- lots. PROFESSOR: Lots, yeah. [LAUGHING] AUDIENCE: OK, so that should change it up, I guess. PROFESSOR: Possibly. I mean, still, your inexperienced players will say that exact same thing. And maybe those expansions-- you can bring an expansion in and change some things up. I don't know. Anybody else who's played Dominion a lot-- have there been strategies that you've seen come and go, like there is in Magic? AUDIENCE: I mean, it entirely depends what's out on the board. PROFESSOR: Yeah, exactly.
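The market chain described above is a draw engine: each market replaces both the card and the action it cost, so the chain sustains itself until a big drawer like the smithy empties the deck. A simplified, hypothetical simulation -- the card effects are loosely modeled on Dominion, not its exact rules, and the deck order is fixed so the trace is deterministic:

```python
# Made-up card effects: (cards drawn, extra actions) when played.
CARDS = {"market": (1, 1), "smithy": (3, 0), "copper": (0, 0)}

def play_hand(deck, hand_size=5):
    """Draw an opening hand, then chain action cards while actions last."""
    hand, deck = deck[:hand_size], deck[hand_size:]
    actions, played = 1, []
    while actions and any(CARDS[c][0] for c in hand):
        # Hypothetical policy: play the +action cards (markets) first.
        card = min((c for c in hand if CARDS[c][0]),
                   key=lambda c: -CARDS[c][1])
        hand.remove(card)
        played.append(card)
        draw, plus = CARDS[card]
        actions += plus - 1          # spend one action, maybe gain some back
        hand, deck = hand + deck[:draw], deck[draw:]
    return played, hand

# Four markets keep the chain alive; the smithy then empties the deck.
played, hand = play_hand(["market"] * 4 + ["smithy"] + ["copper"] * 5)
assert played == ["market"] * 4 + ["smithy"]
assert hand == ["copper"] * 5    # the entire 10-card deck got drawn
```

With no markets in the opening hand the chain never starts, which is the luck factor the players are pointing at: the same deck can fizzle or explode depending on the draw.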
AUDIENCE: Is there [INAUDIBLE] engine and still have the money to buy things? Or is there just really interesting action, or the other cards [INAUDIBLE] tiny deck. Or are there no specific [INAUDIBLE]? And that changes each time you play depending on how you set it up. AUDIENCE: So there's a related game called Ascension, which I pretty much prefer, but probably just because I've played it more, and that was my first one. It's really interesting, because in Dominion, at the beginning, you randomly select a couple of different piles, basically. In Ascension, you actually just have a really, really big deck, and you shuffle that. And you deal out just hold 'em style, just stick cards across the board. And then you take turns just being able to grab something from the middle and then replace that [INAUDIBLE]. So I feel like it makes it a lot more interesting, because it's much less predictable. At any point in the game, something amazing or something crappy could flip. Rather than knowing that you will always be able to buy, until the pile runs out of a different kind of card or something like that. So that's something interesting. PROFESSOR: Anything else you guys want to say about the affordances? Like the rules or how the cards are laid out? Anything that you got out of that? AUDIENCE: I guess it's kind of a bigger game. So the rules were incredibly complicated in parts. But that being said, we did figure it out; I had to ask him a couple of questions. But after that got settled, I think we all understood it pretty well. PROFESSOR: That one's a great example of look at all these rules, oh wait, the game's actually [INAUDIBLE] simple to play. AUDIENCE: Yeah, if we had someone who knew how to play sitting here, we could have probably started 10 minutes earlier. PROFESSOR: Could they have come up with a different way of writing those things, or laying them out, or presenting them in a way that might have helped you out? I don't know. I bet.
Wait, [INAUDIBLE] question there? Oh, one of the questions for you guys, why do you think that was included in the list of these other games? What about that game is similar to some of these other games that you heard? AUDIENCE: [INAUDIBLE] PROFESSOR: No, but that's an interesting-- I just noticed that coming out. That's why I wanted to talk about rules. That's all. AUDIENCE: Definitely the "luck versus skill" dynamic, I think, is pretty key in terms of, like you said, there's strings of luck. But I chose [INAUDIBLE] markets, right? So I was putting [INAUDIBLE] together, and that's definitely skill. So I think the mix is exactly [INAUDIBLE]. PROFESSOR: There's a depletion mechanic in that one, right? Did you read how you end the game? AUDIENCE: Yeah. So we did run out of market-- we never-- we didn't have time to finish. We did run out of market. PROFESSOR: Yeah. And I think it's once you run out of a couple of different kinds of cards, then it ends, right? AUDIENCE: Re-tile before the [INAUDIBLE] card. [INAUDIBLE] PROFESSOR: So you have to always have some kind of-- AUDIENCE: [INAUDIBLE] AUDIENCE: No, [INAUDIBLE]. That's not true. That's [INAUDIBLE]. AUDIENCE: Oh, really? AUDIENCE: Yeah. PROFESSOR: So all these games have some kind of Solitaire-ness to them, where you can be playing on your own. But even if it's a Solitaire-like worker-placement style game, there's something that you're doing that is interacting with the other player and preventing them from moving ahead. OK, so it is almost 3 o'clock. You can either continue playing your game or break into your teams and work on your projects. It's your choice. I'm going to go grab all the kits. We've got one person-- are you on a team? AUDIENCE: I'm not on a team. PROFESSOR: Not on a team. So we've got one person joined a team. We've got a three-person team? AUDIENCE: [INAUDIBLE] PROFESSOR: For a two-player game or a four-player game? AUDIENCE: Two to four. PROFESSOR: Two to four? 
AUDIENCE: [INAUDIBLE] PROFESSOR: OK, so I would recommend hang out with them for the last hour. And then maybe even scrounging around to find a team to go onto. AUDIENCE: [INAUDIBLE] PROFESSOR: What's that? The other teams are four-two people? You can have a five-person team. AUDIENCE: [INAUDIBLE] PROFESSOR: [INAUDIBLE] check out what [INAUDIBLE] Yeah. Yeah, not requiring you to be [INAUDIBLE].
MIT_CMS608_Game_Design_Spring_2014
19_Space_Control.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: --have your attention. I'm just going to go into the lecture part. I was hoping to start with the pitches, but since so many people are missing, we'll wait and do the pitches after the lecture part of class. And then we'll end class with a chance for you to play games based on the lecture. So it's going to be kind of a weird class-- lecture, talk about your game, play games, and then time for you to talk in your teams about your game again and to continue working on your projects. We'll bring in the prototype kits for that last bit, so you can start building stuff. So today's reading was [INAUDIBLE]-- where's my copy of [INAUDIBLE]? Hey folks. So today's reading was I think chapters 11 and 12, was that right? Space control and [INAUDIBLE] And the reason why I wanted you to take a look at these is because it's got two sort of historical investigations-- well, it's got a whole bunch of historical investigations, a bunch of different games. But it kind of goes in depth into two particular games, one in each chapter. So people, remember which game is covered in the reading? AUDIENCE: No. PROFESSOR: No. Really close to that, but you're one letter off. AUDIENCE: Go? PROFESSOR: Yes, Go. And the other one was-- AUDIENCE: A chase game-- PROFESSOR: Chase game called-- AUDIENCE: [INAUDIBLE] PROFESSOR: Tafl. The second chapter that I had you read today starts with fox and geese and a bunch of things, but it really goes in depth into tafl. And this is probably the most in-depth sort of historical research that you find in this entire book. Most of this book is like the stuff in between Go and tafl. 
Where it's like, four paragraphs at most on any particular game, and it's just talking about variants, right? He talks about reversi, which is, what, 200 years old? Something like that. I can't remember the exact number, but he only gives maybe two or three paragraphs on its history. Everything else is just this is how the game is played. These are the mechanics. And so if you wanted to figure out how this game-- all the things that could come up from the game, you could recreate it, just based on the illustrations and the mechanics. But when it comes to Go, and when it comes to tafl, it's a little bit more about the culture surrounding these particular games. And that's a very different take, and an interesting take. So if you are interested in doing sort of academic research in games, that's one way. Kind of like game archaeology, game historian work. One of our grad students, who's now doing his PhD, Jason [? Bakey, ?] is kind of doing that for train games, and the evolution of train games. Even though, obviously, those couldn't have been invented before trains existed. So they're not that old, but there are tons of them. And he's, again, tracking the evolution, but also the culture, the people who play these games. So let's see, how many of you play Go? OK, about five people. How many of you have played a version of tafl, enough to hitch [INAUDIBLE] tafl. I have no idea how to [INAUDIBLE]. You've played it? AUDIENCE: Yeah. PROFESSOR: On paper, on iPad? [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] I have a board, [INAUDIBLE]. PROFESSOR: Oh, cool. OK. You can actually find craftsmen making these things, tafl boards. Even though the rules, as you'll see if you read the book, aren't really well-understood. There's no canonical explanation of here are the pieces, and here are the rules, and here's what you do with them. Because if you find a little bit about one, you find a little bit about the others. 
So there is some archaeological evidence for here are the different pieces for a full set of tafl, and they don't tell you what to do with them. And then there's another set that's just what you do with all the pieces, but it seems to be describing a different set of pieces. Or a different number of pieces, and you're not quite sure. But you get a lot of stuff like poetic records of the significance of the pieces. That gives you a clue about what people thought of these pieces, and gives a clue of what these pieces can do. And again, there are some descriptions of particular plays of this game, which give you an idea of how these moves can be made. But viewed within the sort of larger context of this asymmetric board game, it is largely one group of pieces trying to capture either all or a particular group of the other player's pieces. That's basically the whole genre of the chase game, of the fox and geese type game. And viewed from that context of here are all these other games that have survived a little bit better, like the English fox and geese, this is what we can intuit. It is kind of a siege game, where both sides have soldiers, but one of them is explicitly trying to kill the king. One side doesn't have a king, and is trying to kill the king. And the other side has a king, and a bunch of bodyguards, and they're trying to defend the king. The whole idea is sometimes to get the king to an escape route, sometimes just to survive a certain number of moves, or to get to a particular point on the board. And one thing that I would like you to think about-- if you haven't read it, do read through these two chapters just to get a slightly better idea of what you can do with space. It's one thing that we've been noticing, that in some of the games that have been prototyped in this class, space isn't always used very creatively. 
We see games that are being played out on grids, we've been seeing games where you can move freely throughout the whole grid. But then, what does each individual square actually mean in relation to other pieces on the same board? It's something that these games actually do really well, both the games of territorial capture, which Go is all about, and the chase games, which are these asymmetric games where you try to surround your opponent. Go and tafl are not the only two games described in here. There's a huge amount of variation and thought about what you can particularly do when it comes to manipulating space, being surrounded. What does it mean to have two of your opposing pieces on either side of you, or at an L-shape from you? What does it mean to be on a square grid? Especially in a game like Go-- it's a game that's played on squares. On the corners of all the squares, in particular. And what's to the top, and what's to the bottom, and what's towards the left, and to the right of every piece is really, really important. But what's to the diagonals isn't really so important, until you consider meta-strategy. What's the effect of a clump of pieces, rather than a single piece? The other thing about the section on Go, in particular, is that you get this huge amount of vocabulary. Words like liberties, the way scores are added up. Words from different languages-- Korean, and Chinese, and Japanese. But most of the research in this particular book refers specifically to the Japanese tradition. So most of the terms used are Japanese. How many of you have heard of Atari? Come on. AUDIENCE: You said Atari? PROFESSOR: OK, all right. That's actually supposed to be a derivation of atari, which is a Go term. Don't ask me what it means, I don't actually play Go. But these words have kind of worked their way into even cultures that don't play Go that much, like in much of the Western world. 
In the same way [INAUDIBLE] has made its way into talking about games of chance, because that means dice in Latin. Die, specifically, in Latin. It's in the same way that sometimes we talk about being checkmated, even when you're not playing a chess game. I'm trying to think of other chess terminology that tends to make its way into common English parlance. AUDIENCE: [INAUDIBLE] PROFESSOR: [INAUDIBLE] AUDIENCE: Yeah, [INAUDIBLE]. PROFESSOR: The least worst move. That's a new one. I don't know if they're specifically from chess. I'm thinking more about like in-- It's a concept that applies to chess in particular, a lot. But not-- AUDIENCE: [INAUDIBLE] it's probably a term that derives elsewhere that's [INAUDIBLE] in chess. But do you think about tempo in games? PROFESSOR: Well, I think about tempo in games. I don't really think about it in chess so much, because I don't play chess. But I've heard chess commentators talk about that a lot. AUDIENCE: This isn't really specifically chess necessarily, but a lot of [INAUDIBLE] can say your move, after you're doing something, anything. Usually moving. [INAUDIBLE] PROFESSOR: There's still a genre of treating the world as a game. Very Sherlock Holmes-y kind of, I have such a high intellect that the world is a game to me, and this crime is a game to me. Something like that. AUDIENCE: [INAUDIBLE] I'm just thinking [INAUDIBLE] PROFESSOR: I'm just trying to think of all the chess-- AUDIENCE: [INAUDIBLE] material. [INAUDIBLE] the word material, as a concept of [INAUDIBLE]. AUDIENCE: [INAUDIBLE] position where you're kind of engaging, but you probably [INAUDIBLE] PROFESSOR: Yeah, I think that was terminology that came from outside chess and got applied to chess. But I've definitely heard checkmate being used outside of chess specifically. AUDIENCE: Oh, what about stalemate? PROFESSOR: Oh yeah, stalemate. Stalemate is a situation I generally do associate with chess more than any other game. 
But a stalemate that happens in a business situation, or in a negotiation-- both of us have no way to be able to gain an advantage upon each other. So that's a good way that terminology from something that's as old as chess, as old as Go-- we haven't preserved much from tafl, unfortunately-- can work its way into sort of common parlance. And that's something that he does in the analysis of Go that I don't see in any other analysis inside this one particular book. But you do find it in other people's writing. There's also a lot of talk about who plays Go. Right now, what is the difference? There's the Japanese culture of masters-- a few, almost more scholarly than scholars, a kind of rarefied elite, who are assumed to be the absolute best players of Go. At least in Japan. Although, that [INAUDIBLE] the best player of Go in Japan, you're kind of the best player of Go in general. But then, in Korea, you have exactly a 100-person professional association. Professional, as in you are going to make money playing this thing. There's a broadcast culture around it, kind of like e-sports before the e. [INAUDIBLE] game on TV, right? And there's a lot of hype, there's money involved in it. And it becomes this rarefied grand master class that you're trying to get into. Because once you're in that class, then you get to make the big bucks as a TV personality. In Chinese culture, it's a little bit more of a very historic thing. It's this thing that you could play, but you're playing it for historical reasons. It's not thought of as quite as contemporary a game as mahjong, for instance. Which is very much a social game, this is a game that you play with friends. It's like a game of poker when you're playing a game of mahjong. [INAUDIBLE] doesn't have that kind of cultural resonance. Even though it's the same physical-- more or less, the same physical game as Go. 
I think there are some minor scoring changes, but otherwise the play of the game is the same. So the evolution is that this game started off as almost a spiritual activity. It was something that was promoted by monks, as something that helped meditation. And then moved its way from that into royal courts, and then eventually, the rest of the population. For people who aspired to those kinds of high, lofty social positions, it was seen to be the sort of activity that really smart people do. And then, if you can demonstrate that you're really smart, then you can get jobs for really smart people. Which apparently pay pretty well in Asia. Then you've got the actual description of how the game is played. But even in this particular write-up, the way they describe the play of the game goes beyond the mechanics very, very quickly. Because the nice thing about Go is that the actual rules of Go are pretty darn simple. The idea of Go is pretty darn simple. There are black and white stones, and each player plays all black or all white stones. You play on the grid. I'm not going to draw the exact number of lines, but-- it's that sort of thing. You play on the intersections. And if you've got a situation like this, where you've got one color completely surrounded by the other color on all four sides, this piece is eliminated. It's captured, it's taken off the board. And you can't repeat the same board [INAUDIBLE] immediately. Like if I have a weird situation where I've got a black piece here, I've got a black piece here, I've got a black piece here. I think that's right. And then I immediately place a black piece here, so I captured this piece. Your opponent can't then put a white piece here, and capture that piece. Because that returns the board back to the state that it was in just a couple of seconds ago. That's basically the rules. There's a whole chunk of rules on how you figure out who won, by basically scoring things. 
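The capture rule just described-- a stone or group of stones with no adjacent empty points left is removed-- can be written down in a few lines. This is not from the course; it's a minimal illustrative sketch in Python, assuming a board stored as a dict mapping (row, col) coordinates to "B" or "W" with empty points simply absent, and all names here are made up for illustration.

```python
# Minimal sketch of the Go capture rule: a group is captured when it
# has no liberties, i.e. no empty points adjacent to any of its stones.
# Board representation (an assumption): dict of (row, col) -> "B" or "W".

def group_and_liberties(board, start, size):
    """Flood-fill the group containing `start`; return (group, liberties)."""
    color = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            neighbor = (nr, nc)
            if neighbor not in board:
                liberties.add(neighbor)    # empty point = a liberty
            elif board[neighbor] == color:
                frontier.append(neighbor)  # same color, part of the group
    return group, liberties

# A single white stone surrounded on all four sides by black is captured:
board = {(1, 1): "W", (0, 1): "B", (2, 1): "B", (1, 0): "B", (1, 2): "B"}
group, libs = group_and_liberties(board, (1, 1), size=5)
# libs is empty, so the white group would be removed from the board
```

The ko rule the professor mentions-- you can't immediately recreate the previous position-- would then be enforced on top of this by remembering the board state before the last move and rejecting any move that reproduces it.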
And I've seen students show me many different ways of scoring that, some of them pretty elegant, some of them pretty clumsy, but they all add up to the same numbers. So it's basically a lot of different algorithms to get the same result. But in the end, the basic idea is just surround your opponent. But once you get from that into all of the higher level strategies of the-- OK, out of this game mechanic, what are all of the things that you have to do to be able to play out this game to your own advantage? You start going into things like, how important is it to be able to capture the corners of the board, for instance. What are the typical opening moves that you see in this game? Whereas in his same analysis, in the same chapter, of reversi and Othello, it largely comes down to, are you playing your pieces like this. Or are you playing your pieces like this. Or do you not have the choice at all, because you're playing Othello, and that's what you started with. That's basically the entire analysis of the opening moves of Othello that he gives. But then when you go into Go, there's a whole section on why the opening moves have such a long-term impact. And like chess, there is an opening game, there's a mid-game, there's an endgame. The endgame is kind of like, no real surprises at that point. You're just kind of rounding up what was developed in the mid-game. The mid-game is where all the really sort of strike from behind, caught you by surprise stuff happens. And the beginning of the game is just setting up those situations so that that kind of thing can happen. I think it is nice to try to imagine what a game that has been around that long, that can develop all that richness-- this cultural richness, this game mechanic and strategy richness, that sort of linguistic richness-- how nice it would be to be the person who designs a game like that. Except no one will ever remember you, which is kind of sad. 
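As a concrete example of the point that many different scoring algorithms add up to the same result, here is one simple variant-- Chinese-style area scoring, where each side counts its stones on the board plus any empty region bordered only by its own color. This sketch is not from the course or the book; the board representation (a dict of (row, col) -> "B"/"W") and all names are illustrative assumptions, and it skips real-game subtleties like removing dead stones first.

```python
# Sketch of Chinese-style "area scoring" for Go: score = your stones
# plus empty regions that touch only your color. Regions touching both
# colors are neutral and score for no one.

def area_score(board, size):
    scores = {"B": 0, "W": 0}
    for stone in board.values():          # every stone on the board is a point
        scores[stone] += 1
    seen = set()
    for r in range(size):
        for c in range(size):
            if (r, c) in board or (r, c) in seen:
                continue
            # flood-fill this empty region, noting which colors border it
            region, borders, frontier = set(), set(), [(r, c)]
            while frontier:
                pr, pc = frontier.pop()
                if (pr, pc) in region:
                    continue
                region.add((pr, pc))
                for nr, nc in ((pr - 1, pc), (pr + 1, pc),
                               (pr, pc - 1), (pr, pc + 1)):
                    if not (0 <= nr < size and 0 <= nc < size):
                        continue
                    if (nr, nc) in board:
                        borders.add(board[(nr, nc)])
                    else:
                        frontier.append((nr, nc))
            seen |= region
            if len(borders) == 1:         # territory of exactly one color
                scores[borders.pop()] += len(region)
    return scores

# Tiny 3x3 example: black walls off the top row; the two bottom corners
# touch both colors, so they are neutral.
board = {(1, 0): "B", (1, 1): "B", (1, 2): "B", (2, 1): "W"}
print(area_score(board, size=3))   # {'B': 6, 'W': 1}
```

Japanese-style territory scoring counts territory plus captures instead of territory plus stones, which is a different algorithm that, with the same play, produces the same winner-- the kind of equivalence the professor is describing.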
But also, what would that kind of game have to have? And I'd like to also indulge in a little brainstorming. It's like, if you wanted to-- if it's explicitly your goal to work with a team, to try to develop a game that people will be playing 1,000 years from now. What would a game have to have? What would it need to be? AUDIENCE: It really has to be just like chess or Go, in the sense that it has to be simple to learn, with infinite strategy. It has to be entirely unbreakable. PROFESSOR: Unbreakable? As in-- AUDIENCE: As in no computer could ever solve Go. In the history of humanity, it'll just probably never happen. PROFESSOR: Well, no computer's solved it, yet. I'm not sure if anyone's proven that it can't be solved. AUDIENCE: No, [INAUDIBLE] PROFESSOR: Yeah. AUDIENCE: Probably can. The numbers are astronomical. The chess numbers are astronomical. [INTERPOSING VOICES] PROFESSOR: The chess numbers are huge. AUDIENCE: [INAUDIBLE] no way to simplify the problem. If you're saying [INAUDIBLE] can't be solved, [INAUDIBLE]. Rubik's cube can be solved [INAUDIBLE]-- any Rubik's cube can be solved in 20 or fewer moves. [INAUDIBLE] AUDIENCE: They actually sort of just figured out [INAUDIBLE] and then solved all those pieces. [INAUDIBLE] PROFESSOR: So again, that's hard to reduce into a simple form, which then can be solved. It gives you an idea that there's always going to be at least a minimum level of complexity. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, I can imagine that. But there are sports that have been around for a real long time, and you know, I'm trying to think of one that's been around for 1,000 years. I'm thinking javelin probably has been around at least that long? Those are dexterity games. But if you're going to make a board game to last that length of time, it can't be something like a flicking game. I've played flicking games that have been around a while, but maybe not 1,000 years. AUDIENCE: [INAUDIBLE] PROFESSOR: How long has chess been around? 
Someone with Wikipedia should answer that question. AUDIENCE: [INAUDIBLE] PROFESSOR: It has gone through several different versions. Like there was a time where the queen was a vizier. We're talking about versions of the game that existed in India, and in the Middle East. AUDIENCE: Sixth century. PROFESSOR: Sixth century, yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: I think the most recent version-- the most recent change, I think, would be the power of the queen. And that was Queen Isabella? AUDIENCE: Kind of, it's like mid-1400s, early 1500s in Spain. [INAUDIBLE] but that's about when she gets the-- PROFESSOR: The zoom right across the board thing. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. And I'm not so sure. There are a lot of iterations over how a tournament needs to be played out, the long details. But then things like-- basic things, like how does a piece move. I think the biggest change was probably the queen's. So I was talking about like the 1400s or 1600s, but then that's a 200-year period. When you have a bunch of really powerful European queens, right? That inspires this. AUDIENCE: The game has to be very, very, very balanced. PROFESSOR: Very balanced? As in, both sides have a good chance of succeeding? Yeah. AUDIENCE: [INAUDIBLE] but it's not like-- PROFESSOR: I don't think [INAUDIBLE] is saying that it has to be exactly balanced. Just pretty darn close. AUDIENCE: [INAUDIBLE] PROFESSOR: The 55-45 is not close at all. I [INAUDIBLE]. AUDIENCE: [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: Play a [INAUDIBLE] AUDIENCE: So consider the concept of grand master play [INAUDIBLE] in tournaments where they're playing [INAUDIBLE] they're not going to play to win every single game. AUDIENCE: [INAUDIBLE] something along the lines of like, right now, a vast majority of games are [INAUDIBLE]. AUDIENCE: So think of the people behind chess games, why would that be [INAUDIBLE] against [INAUDIBLE]. AUDIENCE: In Go, being black is supposed to be a six-point advantage. 
AUDIENCE: I wondered, does this game have to be competitive in nature? PROFESSOR: Does it have to be competitive? Can you make a cooperative game that-- AUDIENCE: I think they are competitive. We still don't know the rules for [INAUDIBLE] Ur is before senet. PROFESSOR: All we have is the board. AUDIENCE: Senet, I think, was solitaire, in one version of it. PROFESSOR: No rules exist. AUDIENCE: [INAUDIBLE] I was wondering if the game happened to be two players [INAUDIBLE] PROFESSOR: Again, let's think, not necessarily Go and chess, but also things like athletics, sports, right? All these things that we associate with games, rather than just a pure test of strength. AUDIENCE: Sorry, but I thought each course it tends to be [INAUDIBLE] so that's sort of an abstraction. But not necessarily [INAUDIBLE] PROFESSOR: I'm thinking, what's the oldest team game that we can think of? Hurling? AUDIENCE: The one [INAUDIBLE] balls, [INAUDIBLE] PROFESSOR: Is that what people associate with basketball? AUDIENCE: I don't know, [INAUDIBLE] basketball, I just [INAUDIBLE] AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, I think rolling on the ground-- balls rolling on the ground are a little bit more common. So I'm gonna get [INAUDIBLE] AUDIENCE: So I was gonna say, sort of based off [INAUDIBLE], it seems like it's helpful if it's the sort of game where you can build up a culture of competition around it. That seems to help keep things rolling. PROFESSOR: So it implies that there's some type of skill that you can develop, and then you can compete against other people. AUDIENCE: [INAUDIBLE] PROFESSOR: Let me get Laura first. AUDIENCE: [INAUDIBLE] look at any evaluation six and a half points. LAURA: So for-- AUDIENCE: [INAUDIBLE] LAURA: According to what [INAUDIBLE] percent for white. But it said that, and this kind of makes sense, if you're novice players, it's pretty much [INAUDIBLE]. It's much closer. Which makes a lot of sense. 
And it also means it does affect people in some way to try to learn strategy, because [INAUDIBLE]. PROFESSOR: The more you know, the better it is for you when you start off as white, as opposed to it being just a completely [INAUDIBLE] thing. AUDIENCE: For a lot of these, I think a cultural incentive can help a lot. So for both of these games, you were basically considered intelligent if you could play them well. And then for a lot of the sports-- if you won the Olympics in ancient Greece, or something, that was a huge deal. [INAUDIBLE] cultural significance could really help you develop more of a-- an entire population might try, as opposed to a game that doesn't really matter. PROFESSOR: So what I'm hearing is either you'll end a war by playing a game, that's one way to get somewhere, like real cultural significance. And a lot of athletic events, the Olympics, are kind of about that sort of thing, right? Or you get a king to play that game. [INAUDIBLE] Which could work-- definitely chess has had intellectual leaders. You know, artists, for instance. I'm trying to remember which authors were really, really into chess. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. I wouldn't be surprised if military leadership were into that. I'm actually thinking that maybe instead of balance, what we're going to say is that each side has a fighting chance. It's not so off balance that games often end in foregone conclusions. Even when black has a disadvantage in chess, black can do something. Like force a stalemate. And that gives both players agency over the outcome of the game, even if what they're trying to accomplish is different. White is trying to accomplish victory, black is just trying to prevent it. AUDIENCE: In chess, actually, the most recent rules changed [INAUDIBLE] 2001. PROFESSOR: 2001? That's a while ago. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, it's pretty stable. 
I will say that once you've been around that long, you've had enough tweaks to your rules that things can work out. People are still changing the rules of things like American football. That's-- AUDIENCE: [INAUDIBLE] PROFESSOR: Sometimes those rules changes are for things like, well, we need to make it easier for us to insert ads into this game. And in football and such, it was just like that. So whether that makes it have longevity, I don't know. But it certainly [INAUDIBLE] the current health of it, it works with the current economics of the times. I don't know if Go and chess ever did that, but maybe that's why the intellectual-- if you can go for the actual cultural value, you can go for commercial value. That works too, right? And you know the idea is, if you want to make a game that is really, really going to stand the test of time. Either as a folk game, or a sport, or a board game, card game. AUDIENCE: I guess it doesn't just have to be simple to learn, it should also be simple to teach. Pass on the game really easily. PROFESSOR: You've got to get generations to pick it up over time. AUDIENCE: Especially where it's a board, and two different colored rocks. So even if you don't have a manufactured board, you could teach someone. PROFESSOR: Like on a [INAUDIBLE] or on the whiteboard. AUDIENCE: Or like, pennies and quarters. PROFESSOR: Yeah, exactly. AUDIENCE: And I was going to say, basically it also helps a lot where it's simple. Because obviously that means that you [INAUDIBLE], and that adds extreme amounts of value, the ability to pass it on. PROFESSOR: In sports, you are sort of constrained to essentially having a large enough flat space that you can play a game on. But assuming that's something that's fairly easy to find around the world-- maybe increasingly hard as time goes on. Something like soccer is very easy, right? 
You know, it's like you don't really need a goal, you just need things to mark goalposts, and corners. And then you can play. It's probably a little bit harder to do something like baseball, where not only do you need to be able to mark out the playing space, but you've also got to have all the right-- a stick and a thing that you can throw without hurting anybody. And theoretically, you could play baseball with a rock and a stick, but you know, that game's probably not going to catch on. Maybe it does. AUDIENCE: Building off of [INAUDIBLE] old objects and simple the game [INAUDIBLE] PROFESSOR: It is amazing how that's worked with chess, because chess has so many different pieces. But you know, you've seen boards being made by people who were basically hermits, handcrafted stuff. And pieces just made out of whatever-- by people who were survivors of a plane crash [INAUDIBLE] waiting to be rescued, and things like that. You can actually reproduce the board pretty easily. So you can just use found pieces to be able to reproduce that. I wonder at what point complex becomes too complex. Because if chess is just on the right side of easy enough to reproduce, I can't imagine it gets much harder, much more complex than chess, before we get to the point where we just can't reproduce it anymore. AUDIENCE: Mahjong or shogi. PROFESSOR: Oh, mahjong pieces. Oh, good Lord. AUDIENCE: There's something about them that's going on. A chess piece doesn't quite-- [INAUDIBLE] AUDIENCE: Chess is kind of nice [INAUDIBLE] PROFESSOR: This is sort of like-- AUDIENCE: There's like an accepted aesthetic for it, but you could play it with pieces of paper cut out into different shapes. PROFESSOR: Right. Or you just draw the icons on it. Or you can do the Lord of the Rings chess set, for instance, where instead of a king, you've got Gandalf, or something like that. AUDIENCE: Another game as an example is mancala. [INAUDIBLE] century. Another one that you can play with rocks. 
AUDIENCE: Mancala's a great one. I think I [INAUDIBLE] lot about mechanics, but I wonder if it's a cultural or symbolic, ritualistic element that is also required? So like mancala, you've got cultural [INAUDIBLE], or so it seems. I learned how to play it in Bible study school as a kid. I forget exactly what the message was, but it was [INAUDIBLE] [LAUGHTER] PROFESSOR: Something seems to [INAUDIBLE] AUDIENCE: It's that [INAUDIBLE] cultural stuff that is built into religion there. I mean, chess kind of has war. We really saw that in the '70s. But what has Go got-- what does this say? [INAUDIBLE] Taoism? PROFESSOR: Yeah, it's this sort of Taoist monk kind of thing. So there's a certain amount of spirituality that's associated with Go. There's a lot more spiritual writing that I've seen on Go than, say, military strategy writing. Whereas chess is very much a military strategy thing. AUDIENCE: The Egyptian game of senet. You put the board into the Pharaoh's tomb for the Pharaoh to play after going into the afterlife. It's not played anymore. PROFESSOR: How do you know? Maybe they are still playing it. AUDIENCE: But there's no current cultural significance. At one point, it stopped having cultural significance, so it stopped being played. PROFESSOR: Yeah. I mean, that's a game that people have theorized is actually about a journey into the afterlife. So it has cultural resonance for a society that thinks very heavily about what happens after you die, right? And chess makes sense for a society that's either constantly in battle, or often in battle. And there was a resurgence in the '60s and '70s here in the US. AUDIENCE: Probably had to do with Bobby Fischer. PROFESSOR: Also that. Also that. Celebrities, again-- it's nice to have a king play your game. All right. So a couple of things just to think about. We've been talking in class a lot about [INAUDIBLE] games. And you know, especially the discussion about the Parker Brothers. 
And what it will take to be able to make a game that's going to sell. Last time, we had a guest in here to talk about Kickstarter, Patreon, and crowdfunding-- how to put a game in a box and ship it to people. But then also, the other direction: what if you just try to make a set of rules that anybody-- with a ball and a field, or a chunk of wood that has a good curve, or a bunch of different colored stones, or holes in the ground-- could reproduce? And what's the design process around designing those kinds of games? It might be that's a perfectly legitimate application of game design. And so you might want to think about that. And now that everybody's here, let's hear about your games. Who needs to use the projector today for the pitches? You don't have to, we can turn it off. No? OK. All right. [INAUDIBLE] shut the projector off. And can the ship racing team come up here? And we're going to give you feedback on your pitch. We're also going to give you feedback and tips on the game that you're pitching. So imagine that Rick and I are the decision makers of the publisher that you're trying to get money from. You're trying to get seed money to be able to finish designing this game that you're currently designing. Maybe you're even employees in our company. We're the heads of Hasbro. You're all designers at Hasbro. And you're going to say, this is the game we all want to work on next. All right? Everybody else is kind of like your competition in the company. Yes, but we're not really doing that. They all want something too. So give us that pitch. AUDIENCE: Let's do it. So as you know-- well first of all, thanks for having us. [INAUDIBLE] run through the team real quick. [? Fro, ?] Jory, Matt and Joan. And we are team ship racing. And so we're here to tell you why ship racing is going to be the number one game for Phil Rick Studios. And as you know, games like Agricola and Tsuro are some of the top-grossing games in recent history. 
So since we're passionate about Euro-style games, and we love path building games, why not do a mash-up of the two kinds of games? So that's where-- enter ship racing. A mash-up of Tsuro and Agricola. So let me walk you through our vision for this game. There's gonna be a common board that looks something like this. We're experimenting with [INAUDIBLE] size, but right now, five by five is pretty fun in early prototypes. And this is the common board. And then you have tiles that go on the spaces. And each tile-- here, I'll blow it up for you-- has two nodes on each side. And there will be multiple permutations of paths like this one. Have you guys played Tsuro? Tsuro [INAUDIBLE]? So these tiles will go together and will help navigate your ship. Your ship starts here, and you play this tile, and it ends up here. So the idea of the game is to start here. It's a race to the finish. First person that can go from the left side of the board to the right side of the board is the winner. The cool part is that each player is gonna have-- this is gonna be a zoomed-in view of the ship. So this is where the worker placement, Euro-style, turn-based game comes into play. So on your ship, it will be a simulation. You're going to get a really good understanding of what it's like to be a ship captain. You're going to be able to steer the ship, man the sails, act as a navigator using the sun, sky, and horizon as tools. You can populate the crow's nest, you can be the cook, swab the deck, you could retreat to the captain's quarters to sit with [INAUDIBLE], fish off the side of the boat to get food for your crew. And we're not quite sure how, but these are the two elements of gameplay that we want to mash together to really crush it for you guys next year. Is there anything I'm missing? AUDIENCE: Focus groups loved it in tests. AUDIENCE: We have 100 pre-orders already. We'd love to hear your feedback. Let's see.
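[The tile-routing mechanic described in the pitch can be sketched in code. This is a minimal illustration of a Tsuro-style tile, not the team's actual design; the port numbering and function names are assumptions for the sketch.]

```python
# Minimal sketch of a Tsuro-style tile: 8 ports (2 per side),
# numbered clockwise from the top-left: top 0,1; right 2,3;
# bottom 4,5; left 6,7. A tile pairs the ports into four paths.

def make_tile(pairs):
    """pairs: four (port, port) tuples covering all 8 ports once each."""
    routing = {}
    for a, b in pairs:
        routing[a] = b
        routing[b] = a
    assert len(routing) == 8, "every port must be paired exactly once"
    return routing

def traverse(tile, entry_port):
    """A ship entering the tile at entry_port exits at the paired port."""
    return tile[entry_port]

# Example tile: straight paths top-to-bottom and left-to-right.
tile = make_tile([(0, 5), (1, 4), (2, 7), (3, 6)])
print(traverse(tile, 0))  # → 5 (a ship entering at the top exits the bottom)
```

Chaining `traverse` calls across adjacent tiles is what would carry a ship from the left edge of the board to the right, which is the race the pitch describes.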
PROFESSOR: I'm just going to give you feedback not playing the character that I [INAUDIBLE] I think that being ship captain is actually the center point of this. That's the whole [INAUDIBLE] trying to get people excited about. The actual game mechanic of everything, like putting the tiles together, is intellectually and design-wise very interesting. But it's not the same [INAUDIBLE] to get [INAUDIBLE] AUDIENCE: So there's like a hole in the market, you can't be a ship captain in any other game, but we're going to give you a chance to. PROFESSOR: I think you don't even have to do that. I think to sell the fantasy, to sell the fantasy of this is the age of sail, or something like that. You are a ship captain for the East India Company. Sort of create the fantasy that [INAUDIBLE]. Like Ticket to Ride, you are the globe-trotting, continent-hopping aristocrat, basically. Even though really the game is nothing about that. But that is the fantasy that they're trying to sell when they try to sell you the box. And of course, that is a focus you will need to better flesh out-- how the decisions that you're making on the ship influence your progress on the map. Because it was unclear. It looked like the tile assembling thing is already a game all by itself, like Tsuro. You didn't make it very clear what drives what. Is it the map-building that drives the ship [INAUDIBLE] directions, or do the ship [INAUDIBLE] give you the ability to use on the map of [INAUDIBLE]. AUDIENCE: I hear you. AUDIENCE: I'm curious, what are the primary and secondary sources you're using for how ship captaining works. Why is it that this is-- how do you know that you're actually showing the experience that this is from the perspective of this person? What's the date? What's the range of dates? What's the material you're using, the background material you're using to [INAUDIBLE] AUDIENCE: These are questions that we're asking ourselves, too.
[INAUDIBLE] making it explicit, because we really need to dive into the research. We had a lot of game ideas and now we're just choosing this and we're gonna run with it. So I think that's a good next step, thanks for helping us with that. AUDIENCE: [INAUDIBLE] I think so. [INAUDIBLE] I think [INAUDIBLE] would be [INAUDIBLE] You could do a lot worse than just saying, that sounds awesome. [INAUDIBLE] British navy, you are one of the boats that has carried cargo at the same time [INAUDIBLE] It helps that every one of those boats has a map, has a cross section of the ship, [INAUDIBLE] explains [INAUDIBLE] rest of the book is just use of terms. [INAUDIBLE] which symbol [INAUDIBLE] AUDIENCE: We're going to have to beef up our research. Never been on a sailboat, personally. The fantasy is something that people can relate-- [INAUDIBLE] guys. AUDIENCE: Great competitions. [SIDE CONVERSATION] AUDIENCE: Thank you, thanks for your feedback. [APPLAUSE] [SIDE CONVERSATION] AUDIENCE: I want you to close your eyes for a second. Rick. May 28, 1953. Your name is Sir Edmund Hillary and you are 300 feet on top of the world. Philip. Same date. Your name is [INAUDIBLE]. You have walked-- looked at this mountain your entire life, and wondered what was at the top. For the first time in the history of your people, you're going to actually [INAUDIBLE] by your face. You spent the last week climbing this thing. You're finally at the top. You can open your eyes now. This is the experience that we're going to create with Tenzing Norgay. We want to show people what it was like for these two men that had no idea what was going on. There was a second attempt to climb, the first two died in the attempt. And after 12 days of climbing, they set up base camp in each spot, and reached the top. We're going to make a three-dimensional board with 12 different base camps. I'm not gonna draw 12 base camps right now, that'll take a little bit. But each level essentially is a base camp. 
If you've ever played the game Deception-- Descent, there is one kind of dungeon master type player. It's not like a Dungeons and Dragons dungeon master, who creates an entire game. But he controls the monsters, controls traps, how they attack, stuff like that. We're gonna have a similar element with the environment. The environmental challenges that you face-- various equipment failures, and just the unplanned-for things that happen during the climb-- will be laid out by this environment master. It will be him against the two climbers on their way to the top. There will be different boundaries placed on you, based on the stuff that they had to fight for. The last two base camps, you have to ascend in 24 hours. If you don't, you will die of oxygen asphyxiation. By the same logic, when these players pass the [INAUDIBLE] base camp, they will have called three turns. We'll have to work on the details to reach the top. Things like this can make the experience realistic, make them feel like they're actually on the climb, on the mountain, trying to reach the top. Become the first person in the world to make it. Simple enough, if you make it to the top and get down alive, you win. You get down, you get knighted. Sounds good? That, my friends, that is [INAUDIBLE] PROFESSOR: Source material that you're looking at right now? AUDIENCE: I found a couple websites on just the climb. I also read a book two years ago, not necessarily about this climb, but on the 1996 Everest disaster. Which goes into a lot of detail on what it takes for a pair to climb Everest. The challenges that people face, how they feel while they're climbing. And then, obviously, the worst case scenario of what can happen, when you get caught at base camp four in the middle of a blizzard and 15 people die. PROFESSOR: [INAUDIBLE] AUDIENCE: I planned on doing that the entire time, and then you said [INAUDIBLE] [INTERPOSING VOICES] PROFESSOR: If you have a strategy going in, always volunteer to go first.
[INAUDIBLE], about your style of pitch, and more about the things that you said in the pitch. 3-D board-- really, really think about how you're going to be handing that in on [INAUDIBLE]. Come in with a box that can actually fit this, give us instructions about how we're supposed to assemble this thing. Make sure that it will hold up to repeated assembly and disassembly. Possibly something that may not be essential for your game, something that [INAUDIBLE] which is a really interesting-- AUDIENCE: We met yesterday to talk about it, and I said I was thinking about a 3-D board, if that would look cool. And both Matt and Eddy said they had the exact same thought. So I was like, OK. Then we have to do it. PROFESSOR: You might actually want to incorporate the board into your box somehow, so that the box becomes something that can actually be used to support the board. [INAUDIBLE] it's up to you, really. But do consider things like cost and time, [INAUDIBLE] There is actually a genre of mountain climbing games that looks like two [INAUDIBLE] with ropes [INAUDIBLE] straight from [INAUDIBLE] attached to the-- and they both attach to one weight. The board itself, [INAUDIBLE] quarter of the board are slightly inclined and are [INAUDIBLE] there [INAUDIBLE]. So the whole idea is to get from point A to the top, to the other side, without falling into the hole. And each player holds onto a different rope. [INAUDIBLE] A couple of ideas [INAUDIBLE] AUDIENCE: I think for me, if it is Tenzing Norgay, I would've wanted-- hinted a little bit towards this character [INAUDIBLE] AUDIENCE: Mainly, just put that out there because it was the first name that came to mind. [INTERPOSING VOICES] AUDIENCE: So be very clear if that's what you're going to go for. There's actually some really interesting stuff going on there, [INAUDIBLE] He's Tibetan, [INAUDIBLE]. AUDIENCE: He's a local Sherpa, I have his Wikipedia page.
AUDIENCE: He's a local Sherpa, he's supporting a European-- AUDIENCE: Nepalese. AUDIENCE: To support a European climber. Europeans coming in with money, he's coming in with knowledge of the area, how to survive, and things like that. A really rich amount of material to dive into, that may or may not be useful to the game you want to make. So just choose which-- if you're going to do that, choose why you're doing that. So an alternative would be [INAUDIBLE] for the North Pole, South Pole, [INAUDIBLE] Basically, two European guys are going to find the North Pole, or South Pole [INAUDIBLE]. They are competitors trying to get there first. Is your game two competitors trying to get there first? [INTERPOSING VOICES] AUDIENCE: We considered two situations, and the one we decided we liked was two people working together against one environment person. AUDIENCE: Great, OK. Cool. I just wasn't quite sure about that, so that's really great. That's really interesting. PROFESSOR: I like the idea of playing up exactly who these two people were. Rather than two [INAUDIBLE] this is offset. [INAUDIBLE] can buy things. Maybe not in the middle of the mountain. [LAUGHTER] AUDIENCE: Actually, that's a really good thing. How much of this [INAUDIBLE]. AUDIENCE: You can start the game off at the base camp, or something. And you'd start out with a certain amount of money. And have to buy the various materials or something like that. AUDIENCE: There's a lot of stuff you can go into there. The 24 hour limit for oxygen asphyxiation, really interesting little detail. So really think about the levels of abstraction you're using, and how much you're abstracting away, how much [INAUDIBLE] Some of this, if you're going to do this 24 hour limit, then that might mean that you're focusing on the climb itself. You're not worried about how you got there. You're just setting the player up with all the materials they need to do it. Just be consistent throughout the game. What level of abstraction are you looking at?
PROFESSOR: It's interesting that you brought up [INAUDIBLE] because that implies not only do you have to get to the top, you have to [INAUDIBLE] down. And possibly getting up to the top isn't the end of the game. And that again, comes back to [INAUDIBLE]. One of the things that he does, is make sure that this thing gets published. [INAUDIBLE] The environment does not necessarily need to be [INAUDIBLE] it could be another player. You could make it a two player versus one player game. [INTERPOSING VOICES] AUDIENCE: So when you said when you're positioning as a GM, the way we understand it, is that the GM's really disinterested. [INTERPOSING VOICES] AUDIENCE: I compared it to the set where you're playing-- you're actually playing against the other person. PROFESSOR: There's more than rules enforcement then. Then you're actually working against-- AUDIENCE: There's a set of rules that everyone has to learn. We specified it's not like a Dungeons and Dragons dungeon master, but where you're actually competitive against the other player. And there's a set of rules that you both have to work with. PROFESSOR: You might as well call that Dungeons and Mountains. Thank you, thank you. [APPLAUSE] PROFESSOR: [INAUDIBLE] of which [INAUDIBLE] [SIDE CONVERSATION] AUDIENCE: I'm Liz. AUDIENCE: I'm Laura. AUDIENCE: Michael. AUDIENCE: And we're pitching [INAUDIBLE] game [INAUDIBLE]. So a very relevant problem in the modern day is the lack of women in the tech industry. And how they're being generally harassed in gaming culture, and things that are heavily tech-oriented in general. And we're trying to address that by exploring the boss-employee dynamic that women in the tech industry currently have to deal with. And very much the reason why women are avoiding the tech industry every day, 'cause it's just easier to do other things. So we're planning to do a live action role play to explore this relationship, where one player plays as the boss.
And he has the ability to create a new rule every minute, or something like that. And one player plays as the employee, and she has to follow all the rules that the boss says. And maybe her ability to be productive is affected by certain phrases that the boss says. Which he's required to say, but he isn't sure-- he doesn't know that they might affect the person in this way. They are just on the character sheet. It would probably be a minimally re-playable game, unless we introduce new characters and new scenarios. And they'd be modular conditions on rules, introductions, and the way they respond. And both players will have goals that require the other player to cooperate in order to succeed. So they're going to have to learn to work together in order to accomplish their goals. So they have to learn to communicate, despite the rules making it hard for them to do so. AUDIENCE: In terms of source material, there's currently a wealth of information on the internet about people bringing up this issue in the tech community. It's also something that's particularly relevant on MIT's campus, just because so many people here will go into tech. And so for source material, we're looking at a lot of-- there have been a lot of recent articles written on the subject. AUDIENCE: Also, I have a survey that I sent out for another project that I did, that's basically asking women about the problems they encounter every day at work. And a lot of these are in the tech industry. And even if they're not in the tech industry, there's still boss-employee dynamics that we can use to further explore that topic. AUDIENCE: So if we have the time, there's an alternative idea that we haven't had as much of a chance to discuss. It's a little less on the side of something that's an educational tool that's minimally re-playable, and you just play it once to try-- end up cooperating. And more for like, maybe toning down the message a little bit for the sake of making it more of a game.
And the idea is similar, where the boss is in charge of the rules, as far as the employee goes. So that all of the rules that are in play for a particular section are going to be given to the boss at the beginning, or over the course of the game. And the employee would be told only whatever the boss tells them, and penalized for whatever the boss decides to penalize them for. And there would be some sort of inadequate system for the employee to disagree. And overall, goals might be less cooperative, and more directly opposing. Where, for example, the employee's goal might be to get the boss fired without getting fired themselves first. And the things that they might do that would let them do that, like finding out things about the rules, might take their focus off of being productive, which would generally lead them to get fired. Whereas the boss's goal could be to end up with a golden parachute. End up being profitable, end up exploiting the employee enough to live happily ever after on their own, and quit themselves. At a time where they have enough to just, like, coast by for the rest-- If they get enough out of the employee before having to either fire them, or ending up in trouble themselves, that would be the other side that would be the success for the boss. PROFESSOR: OK. So is there anything else you want to get in [INAUDIBLE] Couple of things. It's [INAUDIBLE] nice to have the clarity of this is-- if you can specify your audience [INAUDIBLE]. You mentioned it's minimally re-playable. I'm wondering whether there's some value in playing it twice with inverted roles. AUDIENCE: That was considered. We meant that it's just-- if you keep playing as the employee, and you already know that you're being lied to. And even worse, if the rules stay the same, and you already know the rules, then that's completely defeating the purpose.
AUDIENCE: I mean, you could still play it to explore the relationship further, and maybe see what else you could have done. But fundamentally, [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] because it's live action, because you're two people working together, talking about this stuff. Even if the rules are the same every time you play it, that other person is going to be different. I wouldn't worry about minimal replayability. It's very replayable. PROFESSOR: And definitely reusable, among-- in a training setting, absolutely. [INAUDIBLE] reusable. If you don't play it again, at least no matter what you do, [INAUDIBLE] how do you talk with each other after the game is over? And [INAUDIBLE] because that sort of helps people reflect on what they should have picked up while they're playing the game. [INAUDIBLE] thinking about how [INAUDIBLE] AUDIENCE: Great thing for that, if you can do that without requiring a facilitator, you're already doing much better than a lot of the games in this sector. It gets your game a little bit more useful, a little more used, if your players can debrief each other. PROFESSOR: You describe specifically the relationship between a boss and an employee. And it seems like that-- I guess you should be clear whether all of the tension is specifically between these two people, or whether there's also any abstraction of the culture of [INAUDIBLE] Say in a stereotypical male boss, female employee situation, there's a lot of culture supporting the male boss. That's not there [INAUDIBLE] so how do you put them together? The sort of touchy phrases thing reminds me a bit of Taboo. Which may not necessarily be a [INAUDIBLE] For research, micro-aggressions, I think, is one thing that comes up a lot. [INAUDIBLE] you just [INAUDIBLE] get a lot of stuff. Not just gender, but also ethnic discrimination, disability, [INAUDIBLE] You can find a lot of material [INAUDIBLE] actually.
Because-- AUDIENCE: Just to make clear, those were the kind of-- two very separate-- AUDIENCE: For future-- for pitches-- don't pitch two ideas. I had a hard time following the second one. If you're going to pitch the second idea, a very simple two lines-- and if we did this the other way, this is how it would be. Enough to get it [INAUDIBLE] AUDIENCE: I mean, it obviously came up here because we're not at the stage where we have one idea. AUDIENCE: So we heard a great, clear idea that's still not specific, but pretty general. And then with this extra route, this alternate route, it's really just a matter of knowing a two line version of how is this different from the other, without having to go into detail on it. PROFESSOR: Describing it, I think, as a variant, this is a very [INAUDIBLE] right now, rather than here's a second idea. This is just a general pitch. [INTERPOSING VOICES] AUDIENCE: Talked about it, if you haven't talked about it enough, we don't need to hear it. If you were trying to get money, [INAUDIBLE] PROFESSOR: But since you did bring it up, I do want to give my perspective from a design point of view. I think that an actual competitive game might-- different goals is fine. But it's sort of like, opposed goals [INAUDIBLE] I think takes the game out of a situation that a lot of people can see [INAUDIBLE] in. But a lot of people don't actively want to get [INAUDIBLE] or at least don't see themselves that way. So just to be able to make a game where people can easily make connections to their real life, the goals should be a little bit closer to non-competitive, even though it may not be a completely aligned goal. Any questions for us? [INAUDIBLE] AUDIENCE: I had just another quick [INAUDIBLE] for research. We did an equal pay game [INAUDIBLE] two years ago, I think. And there were some interesting things that came up with, like, talking about specific scenarios.
So how does the fact that women don't necessarily-- at the time, it was thought people didn't have the data to get a pay raise. So maybe it was just a matter of, well, they don't have the data. So if we just give them the data, maybe it would help them. It's not much more complex than that. [INAUDIBLE] see what kind of scenarios are you going to do. Are they going to be very specific, like going to your employee to try and get something that you need? Or is it going to be what I think I heard, and correct me if I'm wrong, an employee going to a boss to try and just do the job they need to do. And the boss [INAUDIBLE] can get difficult for them to do it. PROFESSOR: There's a lot of stuff about how the same adjective you use to describe a female professional or a male professional can be positive in one sense and negative in the other. It's the same word, literally. [INAUDIBLE] definitely [INAUDIBLE] AUDIENCE: And really good seeing you already did a survey of getting actual phrases that you can use, and actual perspectives [INAUDIBLE] AUDIENCE: So I guess I have a question. Which is, how would you recommend dealing with the fact that the players might not be the genders that they're playing? PROFESSOR: That could be a very valuable learning experience. AUDIENCE: No, I understand. I mean, the fact that if it even-- if you just said about the phrases. If you're having a conversation, and that word comes up, it just won't mean anything necessarily to someone. [INTERPOSING VOICES] AUDIENCE: Game mechanic for that. If it's known that that word is going to have a positive or negative effect on that particular gender, then you make a mechanic around that. Your mechanics should support these things. There's times when you don't need a mechanic because players will all understand. But in any game where you're talking about these kinds of cultural things, the mechanics kind of just help buttress the message. The things you're trying to get across.
PROFESSOR: Yeah, all the research implies [INAUDIBLE] game mechanics are just making the system obvious. [INAUDIBLE] AUDIENCE: That stuff should just come out in debrief. Especially if you have two people with different genders and different experiences [INAUDIBLE] [APPLAUSE] PROFESSOR: [INAUDIBLE] [SIDE CONVERSATION] AUDIENCE: We have a third group member, Damon, who's not here today. So our game mostly explores the dynamics of the balance of power that can occur when there's a bunch of-- when there's three people who are all fearful of the others there. And how there's an unstable equilibrium there, where they're trying to keep up with the other two to make sure that no one gets far ahead of them. But the game's dynamics will be such that this is guaranteed to fail in the long run. As such, one of the things that players will have to do is try to realize when they have a large enough advantage to strike, and it's really about finding the moment of opportunity when they can make an attack that will work out well. It mostly explores some of the-- it's intended to be sort of a strategy game. But sort of lighter than some of the versions that cover similar topics that can take several hours. It's going to cover a lot of the historical trends around the time. So the conflict between populist reformers and the conservatives, and the rise of the equestrian class, and their importance as a power base to some of the leaders there. And also how the loyalties of the legions were shifting from the Senate to the individuals who were actually paying them, and giving them the lands that they would retire to. That's what this game covers. PROFESSOR: Source material? AUDIENCE: Source material is I took a class in Roman history, and it's mostly just stuff that I remember from that. I've looked at [INAUDIBLE] sources online for detailed stuff, like the numbers of legions, and where they were stationed, and that's about it.
AUDIENCE: So we remember the game's about the Triumvirate. I don't remember what personal perspective it's from, so the pitch should have that in there. AUDIENCE: I'm sorry. AUDIENCE: So you want to reiterate that [INAUDIBLE] AUDIENCE: Yeah, I just completely forgot to, like, describe exactly what the game was. It's the [INAUDIBLE] members of the First Triumvirate and they're all sort of-- Caesar, Pompey, and Crassus, and they're sort of jockeying for influence. And eventually plan to sort of usurp the other two and become the sole leader of Rome. PROFESSOR: For a pitch, you have to assume that the audience doesn't know that. The audience may have heard of Caesar, and not realize that at the time that your game is [INAUDIBLE], he's not in charge. Because everyone assumes-- Caesar means king, right? [INAUDIBLE] who is Pompey, who is Crassus-- especially Crassus. A lot of people haven't heard of him. [INAUDIBLE] You talk a little about light strategy, and I wonder if that means it's still a war game? Or is it a little bit more of a political points type game? Do you expect it's going to be a victory point based system? Or is it going to be more about military [INAUDIBLE] AUDIENCE: [INAUDIBLE]. It's not going to have a victory points system. It's going to be sort of-- the way you win is by crushing the other two players there. There are multiple paths that this can happen through. You can get enough legions to become personally loyal to you that you can just beat the other two people in the [INAUDIBLE]. Or you can gain enough support from various groups in the realm that you really can just start to snowball. And you can have a tremendously large force that the other players can't deal with there. And it's going to be a war game, but with sort of more-- mostly what it is is action selection. The players will have a choice [INAUDIBLE] on their [INAUDIBLE] choice of actions that they can take.
The game will probably be such that the players will go in order over about a 10 year period, or maybe a 15 year period. They'll have about four actions per year [INAUDIBLE] That's the sense that, for instance, if the players build new buildings, or conquer new provinces, it makes more options available there. Also the struggle, if you-- depending on whether the optimates-- the conservatives-- or the populares are sort of in power, or have more power, different actions are available also. PROFESSOR: So I just had maybe three different visions, just from that last [INAUDIBLE] of what this game could be. On one hand, you describe it as a Roman war game. And I'm thinking units and pieces on a hex grid. AUDIENCE: It's very much you go and-- it's in what you take, sort of. It's very much [INAUDIBLE] where the players are just choosing if you want to go [INAUDIBLE] you just say, OK. I'm going to go launch an invasion of Gaul here. I'm going to take this [INAUDIBLE]. You have to prepare for it. You have to make sure that you have the position that you would go to do this, that you have the resources to raise the legions necessary there. You also might want to [INAUDIBLE] to call your veterans if you're successful. And then, once you launch [INAUDIBLE], there'd be a factor of luck to determine how successful you are there. And you would probably continue repeating the same few actions to continue your invasion for several years. PROFESSOR: And that gives me a completely different image of what your game is. Which is more of a collecting resources [INAUDIBLE] tokens. That's the second image [INAUDIBLE] And then you said action selection. And I'm thinking that [INAUDIBLE] like a card game. Almost like a game where these are the things that I could be doing right now. Maybe [INAUDIBLE] if you get say, if you [INAUDIBLE]. AUDIENCE: The actions will be normal in [INAUDIBLE] global there.
So in the sense that if the populares are in power there, the set of actions that you can take is sort of different. Because the political possibilities are different. Because the political [INAUDIBLE] So like, if the populares are in power, it's much easier to get land for your veterans, for instance. PROFESSOR: So you still have the same actions, but how hard it is [INAUDIBLE] AUDIENCE: Some of them might get grayed out or something. [INTERPOSING VOICES] AUDIENCE: Specifically, what happens is as the [INAUDIBLE] has mentioned, that the actions will sort of flip over there. Like instead of the populares being able to do this, instead, you're able to get the conservatives to do this [INAUDIBLE] PROFESSOR: So I think [INAUDIBLE] of what your game's gonna look like. But that's something you want to get across in the pitch. [INAUDIBLE] board, you could use hand gestures, like you just did. Referencing other games. Just so that when you're talking about the game, you're trying to help me see it in my head. [INAUDIBLE] AUDIENCE: One of the last things. So we're going to have three players. Each will be one member of the Triumvirate. I imagine each of those historical people had different perspectives on how they saw things. Were they equal? They had different backgrounds? AUDIENCE: The game is going to be portraying them as having equal views, the same goals, the same views, the same perspectives, and everything. But it treats them differently. They will have different starting resources, and different advantages and disadvantages as the game goes through. So like Crassus will have more money, and [INAUDIBLE] money, for instance. I imagine that Pompey will start with a lot more influence than the others. AUDIENCE: Is there a historical or realistic basis for any of that? AUDIENCE: Yeah. Crassus is often called the richest man in history there. He is known for just acquiring such massive-- he had more money than basically the Roman government.
AUDIENCE: So that's going to at least help him with the player's starting position. Did that wealth-- is that wealth then carried over to the kind of actions that he's able to do historically? Or was it really just setting up? AUDIENCE: It's going to offer-- I haven't decided whether or not that will offer him bonuses towards acquiring wealth, or whether or not he will just have more wealth at the start of the game there. AUDIENCE: From our point of view, for the assignment, we're really interested in seeing the personal perspectives. So you could do the generic, they're each an equal person. It'd be more interesting for us to see it be a reflection of the actual personage. AUDIENCE: The differences, I think, are really going to shape their strategy, and how they start the play. So for instance, Caesar will sort of have a bonus towards campaigning, or not necessarily a bonus towards military invasions, but making his legions loyal to him. His legions become loyal to him quicker. I'm not sure which of these is sort of-- and Pompey will sort of start out more powerful and prestigious than the others. But during the first several years of the alliance there, he was obsessed with his wife, Julia, who was Caesar's daughter that he had married there. And he was neglecting lots of his responsibilities because of his wife there. So he will probably have penalties, he would miss every fourth action, or something. He starts off in a much more powerful position, and he sort of has to use limited actions to try to not fall behind in the earlier part of the game. So [INAUDIBLE] these players sort of have the same goals, but these small differences are going to cause them to take very different strategies oftentimes. PROFESSOR: So I'm actually going to suggest that the differences are what the game should be built on, not the bedrock of similar [INAUDIBLE].
It seems to me that you are describing that there is an objective reality which all three of them agree on, and they are just playing on this field with their individual advantages. But I'm going to suggest that you start from the differences. Start from the fact that one of them's richer than everybody else, one of them's more loved than everybody else, and one of them just [INAUDIBLE] And then you can just-- what if the game is all about those differences, rather than everything else that was involved [INAUDIBLE]. Because then we get a little bit closer to how these three particular people see the world, rather than [INAUDIBLE] AUDIENCE: The thing is, I really like a lot of the differences, the sort of [INAUDIBLE] and to have these differences, [INAUDIBLE] I intend to have these bonuses that are really not going to be-- they're just going to be one [INAUDIBLE], something fairly simple. But they will cause the players to be playing almost completely different games. Where Crassus is basically trying to take actions that are just [INAUDIBLE] is basically going to try to acquire lots of money, and then use it to brute force his way through. Try to find the most efficient ways to use this, or try to-- PROFESSOR: Set it up to be [INAUDIBLE] it's not just [INAUDIBLE] project. It really should be all three players end up playing very different games, [INAUDIBLE] AUDIENCE: That's definitely [INAUDIBLE]. Like Caesar's going to be-- he has the potential, but he needs to be able to get the resources so that he can start-- that he can really [INAUDIBLE]. Whereas Pompey has this power and is basically going to be trying to consolidate it, and try to threaten the growth of the other two. Thank you. [APPLAUSE] PROFESSOR: So that was probably as good a time as any to take a short break. We'll have about-- these games generally don't take more than 45 minutes.
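The asymmetric setup the students describe — same goal, different starting resources, one small rule tweak per player — can be sketched as data. This is a hypothetical illustration only: the names of the fields and all the numbers are invented for the sketch, not taken from the students' actual design.

```python
# Minimal sketch of an asymmetric-start design: three players with a
# shared baseline, each differing in one or two invented parameters.
from dataclasses import dataclass

@dataclass
class Triumvir:
    name: str
    money: int = 10                # shared baseline, placeholder value
    influence: int = 10
    legion_loyalty_rate: int = 1   # loyalty gained per campaign action
    actions_per_round: int = 4

players = [
    Triumvir("Crassus", money=30),                        # richest man in Rome
    Triumvir("Pompey", influence=25, actions_per_round=3),# powerful but distracted: one fewer action
    Triumvir("Caesar", legion_loyalty_rate=2),            # legions become loyal twice as fast
]

for p in players:
    print(p.name, p.money, p.influence, p.legion_loyalty_rate, p.actions_per_round)
```

The point of expressing it this way is that each difference is a single parameter off the shared baseline — simple to implement, but enough, as the pitch argues, to push the three players toward very different strategies.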
So these are related to the reading about space control. Not so many chase games, mostly control games. And although, Ticket to Ride I guess has some sort of set building in there as well. Play through the game. It seems like a lot of people didn't do the reading, so I'm not so sure how much context they give you. But I do still want you to end up having played a wide range of games before the end of the semester. And then at about 3:30 I guess, a little before 3:30, we'll bring in the prototyping boxes, and you can spend the rest of the class working in your teams. AUDIENCE: Go over this again really quick? AUDIENCE: So Panic Station is a little bit of a-- it's got a traitor mechanic. You are trying to survive in a space station, but there's aliens. And they're infecting you, and you're going to infect the others too. [INAUDIBLE] is another one of those path building games that has area control, and other things like that. French, medieval towns, [INAUDIBLE] PROFESSOR: Through the Desert, it's kind of like mapping paths. AUDIENCE: Path building, mapping, you're creating basically a supply chain line of camels through the desert. It's a [INAUDIBLE] game. I've not played [? Purple. ?] [INTERPOSING VOICES] PROFESSOR: And every one of these games that we put up so far, these three games are all about putting tiles down. This one is more about putting-- I was going to say putting dudes down, but really you're putting down camels. They're really cute camels. AUDIENCE: This is putting down trains, but you're putting them down creating sets. To create matching sets of trains. PROFESSOR: Cool. All right.
MIT_CMS608_Game_Design_Spring_2014
10_History_of_American_Board_Games.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We're going to do a little talking about the reading, and then we'll play a couple of games. And then the last hour will be time for your teams to be able to work on your projects unless, of course, you are playing Monopoly, and your property [INAUDIBLE]. So today is kind of like the historical part of class. I mean, we've looked a little bit at some board games, some card games that [INAUDIBLE]. But now we're looking at a very specific time, largely 1900 to about 1950, prior to World War II. There were a number of games that I could have brought out, things that you would also be familiar with, things like Boggle, for instance. But those come a little bit later. Some of these games, like Scrabble, for instance, have predecessors that come from the time period that we're talking about, but these versions that we're familiar with now probably had some rule changes along the way. Similarly, how many of you played Othello or Reversi? Yep. So Reversi dates from the time that I'm talking about. It might actually be slightly before 1900. But Othello comes much, much later, like the '70s. And how many of you knew the game of Othello? Looks like [? we're quite ?] [? reversing. ?] So that's kind of like a trademarked version of the rule set, which basically specifies the initial start position of the game. I believe it was adapted by a game designer who wrote up the rules, set up [? deployment ?] rules and publicized the game of Othello. But the version of Reversi-- maybe not black and white, maybe with different colors on each open face-- is way before that.
And Uno comes much later, but in doing this reading, you read a little bit about a game-- I think it was in today's reading-- about a game that Parker Brothers tried to publish that was kind of like Uno, only it had five colors instead of four. So that's the reason why I brought these in today. It's not necessarily because all these games come from 1900, but a lot of them started around about this time. And [INAUDIBLE] local history, Salem is really not that far away, and I'm not sure that the building still exists, but has anyone been to Salem recently? AUDIENCE: Yeah. PROFESSOR: Yes? Do they still have the Parker Brothers building there? Because it's huge. Or it was. AUDIENCE: I don't know if it's still-- I'm not actually sure. PROFESSOR: It's probably a tourist attraction. AUDIENCE: No, definitely not a tourist attraction. PROFESSOR: OK. This is all [INAUDIBLE]. AUDIENCE: I think it might be repurposed to [INAUDIBLE] apartments because they've repurposed a lot of-- PROFESSOR: A lot of Parker Brothers properties are now owned by Hasbro, which is also not that far away, over in Rhode Island. In fact, if you look at all of these games, they say Milton Bradley, they say Parker Brothers. But if you look at who currently owns the property, it says Hasbro. That the [INAUDIBLE] And all sorts of games, like Tiddlywinks-- how many of you played those growing up? I never played them. Some of the [INAUDIBLE] showed me how to play them. This is not part of [? the talk. ?] [? Ted ?] briefly talked about the [INAUDIBLE] And anyone remember why Parker Brothers went into card games? Because they were only making board games. They were making things like ping pong before that. AUDIENCE: They were smaller and easy to manufacture. PROFESSOR: Uh-huh. AUDIENCE: Larger profit margin. PROFESSOR: You print a sheet of cards, you cut it into identical slices, pack it in a tiny, little box.
You can put a lot of these on store shelves or a truck or [INAUDIBLE] as opposed to big boxes of Monopoly. So I think that the takeaway from everything that we're going to discuss today is really all about how marketing and sales concerns are going to affect you guys. If you read the stories of Phil Orbanes, [? one of ?] the [? most wanted ?] [? ones, ?] vice president of R&D at Parker Brothers, also, apparently, the best Monopoly player, [INAUDIBLE] material. He's also writing from a very biased point of view. He's writing supposedly as a Parker Brothers [INAUDIBLE]. And you can take everything that he says with a grain of salt. I'm sure not everybody who works at-- I'm sure [INAUDIBLE] as the book makes it out to be. But when [INAUDIBLE] I think we can rely on them, because I think it's a pretty good design diary of how various games became the way that they were. One thing that he didn't go into a lot of detail on was Monopoly because that kind of comes after his [INAUDIBLE]. But there's a little bit of that that I wanted to talk about. And we mentioned that in [INAUDIBLE] it used to be called [? Landlord's Game. ?] This was a patented product. And the name of the original designer was Elizabeth Magie, somewhere right in the beginning of the 1900s, although it took about three years between the patent and actually publishing to figure out all the manufacture [INAUDIBLE]. Parker Brothers only came into ownership of this product around about 1935. So it took about 30 years. And the original feedback from George Parker himself was that "the game is rejected because it's too complicated and too technical and takes too long to play." So they didn't actually buy Monopoly from the [INAUDIBLE] They bought it the following year, after Christmas, when they saw how well [INAUDIBLE]. And in fact, George Parker invented the quick rules and, I believe, actually imposed a time limit rule, which does not exist any more.
But everything that we've talked about, the Parkers and Monopoly, was something that George Parker and the Parker Brothers were very, very much aware of. They knew that there was a problem with the game. But they also realized how it tapped into the zeitgeist, that there was an opportunity there that could be capitalized on. It was one that fit very well with their strengths as a board game manufacturer and publisher. You saw how quickly they could turn around the company to make it [INAUDIBLE] when the demand was there for it and what they had to do to hire people who basically operated coin machines to become jigsaw operators. [INAUDIBLE] So that was the story of why Monopoly made its way into the Parker Brothers collection. It was just [INAUDIBLE]. And it certainly wasn't the design of the game that attracted them to acquiring the thing. They just wanted to dominate the American board game industry and in some cases, the British and Western European game industry, as well. Clue, on the other hand, is one of those things that has a very British origin, [INAUDIBLE] before it came over to the US. But I think in the reading, we had a couple of examples of how the Parker Brothers [INAUDIBLE] initially and then eventually hired people based in London to try to acquire work that had been already patented in London and why the American publishing [INAUDIBLE]. And that kind of gave you an idea of what the board game industry looked like around about the turn of the century. Why is it that it's a little bit hard to think of many examples of proprietary games? There's a lot [INAUDIBLE] in chess boards and playing cards. All these things are good. You might have a design on the back of a playing card or such a thing [INAUDIBLE] the king or queen is illustrated that could be copyrighted. But the games that you can actually play on then were pretty much [INAUDIBLE] games, both games are games that you can just really share with [INAUDIBLE].
Demand [INAUDIBLE] but around the 1900s is where mass production and mass distribution all come into the fold. It's no accident that Parker Brothers was sitting in Salem, because of the seaport. Even though it wasn't the best seaport-- because, by then, the world was moving to the steamship, and you needed a deeper harbor, like what Boston had-- it still had the means of moving large amounts of merchandise. At least two [? barges ?] were [INAUDIBLE] of the United States. So prices-- I just want to give an idea of how recent this whole idea of buying a game off the shelf is. And I'm only talking about the 1900s. And prices, actually-- you might buy a toy. You might buy something that you played a game with-- golf clubs or something like that, a cricket bat. But you wouldn't necessarily just buy a whole game with a branded title on it, just off the shelf. The jigsaws are kind of already a pretty extreme case where this is its only game, and it's not interchangeable with another game with the same title because [INAUDIBLE]. And until the 1900s, we didn't necessarily have a situation where people assumed that games could be copyrighted or trademarked. Games were just things that people played, and if you liked it, you got the products you needed to play it with, but then you wouldn't necessarily own it. Anyone else could go ahead and get the same products or similar products and play, basically, the same game. One example of this is something that's not in the reading, but I guess this will be over in [INAUDIBLE]. So how many of you read H.G. Wells? The War of the Worlds? [INAUDIBLE] Do you know that he made a board game? He made a board game with little, miniaturized tin soldiers. It's called Little Wars. You can find it on Project Gutenberg. It's about 24 pages. And the rules are only about six pages long, so it's a quick read. But the previous 18 pages are his development diary of how the rules were made.
And basically, [INAUDIBLE] about how he actually ended up developing a set of war game rules: how you set up troops, how far they move, how you launch cannons, shoot at each other. And it was all based on this one little toy that you can buy from any toy store. So it wasn't like, here's a box that you will buy with H.G. Wells' Little Wars on the title. He just printed it as a [INAUDIBLE]. And if you got that, you would read a little story by H.G. Wells on how to [INAUDIBLE] and then right in the back of it [INAUDIBLE] that you can buy these parts. You can play this game for yourself. So again, he kind of owns the book, but he doesn't really own the game. You are expected to find the parts yourself, in particular, those little breech loader cannons. I think actually there were little explosives in it, and then you could light the little cap. And you could use that to fire little wooden projectiles [INAUDIBLE]. It was pretty darn cool. So you can do things like terrain. And there were instructions in there on how you make model terrain. If you made a building, it had to be completely solid-- you could fill it up with toy blocks-- because you don't want anyone to put their troops inside the building, but you can put them on top of the building. Pretty cool. But that's some time in 1914, 1915. That's already past the date when these products were starting to appear on the shelf. If it gives you any indication of what it was like before that, it's that people printed out rulebooks. And you can still find books for card games, for instance, on your shelf, like 101 Games You Can Play With a Deck of Cards, or something like that. But that's how games were shared. They weren't put in a box and shrink wrapped, tied together with twine or something like that and sold as a product. So today we've got these games. Just a quick look through some more details, by the way-- I want you to take a look at these boxes that I'm going to be handing around.
And look at the sizes of them. I want you to think about how you would put this on a store shelf like Target or Walmart. I guess at that time it would be a [INAUDIBLE] store. And these are modern boxes, obviously. These don't date back to the [? past ?] century. Because I kind of want to talk about, also, the reality of what it's like to be able to sell a product like this in stores today. Actually, I should have brought in a [INAUDIBLE], but the jigsaw box is about the size of the box of the time. The first thing is orientation. This is meant to be sort of seen this way or this way, possibly back this way. How many of you have gone shopping in a Target or in a Walmart or a Sears or something for gifts at Christmas? OK. Yeah. You're buying it for family members, [? among them? ?] Friends? For yourself? Anyone buying games for yourself at Christmas? No, actually, this is recorded. I don't think it's any great surprise if I told you that the majority or a very huge chunk of the profit that a company like Parker Brothers or Hasbro will make will be during the Christmas season. But they are not usually bought by the people who are going to play them. They're usually going to be bought as gifts. So all of this stuff is positioned not to attract you to think that, oh, this is something I want to play, but rather, this is something that I want someone else to play. Or I think someone else is going to like this. How many of you bought Monopoly for a friend or for a family member? Yeah. Did you think it was a good game at the time when you bought it? AUDIENCE: Eh. PROFESSOR: Eh. So why did you buy it? AUDIENCE: Because other people liked it-- PROFESSOR: Because other people-- AUDIENCE: It was a gift. PROFESSOR: --because someone you were buying it for might-- AUDIENCE: They [INAUDIBLE]. PROFESSOR: Oh, they were your-- AUDIENCE: Yeah. And [INAUDIBLE]. PROFESSOR: And maybe you remembered liking it when you were their age. AUDIENCE: Yeah. PROFESSOR: Anyone else?
AUDIENCE: We got it for my grandpa. PROFESSOR: You got one for your grandfather? And had you played with your grandfather before, Monopoly? AUDIENCE: No. PROFESSOR: No? A [? lot? ?] OK. So maybe this is an opportunity to play this, OK. But had your grandfather played it before by the time-- AUDIENCE: I'm sure. He's old. PROFESSOR: Yeah. The game's old. AUDIENCE: That's right. PROFESSOR: Who else? I thought I saw some other hands. AUDIENCE: Yep. I think we played with my cousins at Christmas. I think it's the only thing any of us play. PROFESSOR: Is that like the thing you can do with your cousins? AUDIENCE: Well, they're younger. We have a kind of wide age range. We have second or third grade up to high school at the time. Yeah. PROFESSOR: This is a hardcore, ruthless, capitalist game for second graders, right? But you can play it. But people have fond memories, not necessarily of the game itself, but of sessions playing Monopoly with family and friends. And even whether or not the game's any good, you kind of hope that other people will have those fond memories of playing it with friends and family, maybe even because it's you giving the game. So you might end up actually playing the box that you bought. But that's what all these games are designed to do. These game sets are packaged to be gifts. And so you look at something like Battleship and Risk and Monopoly and Clue, they're huge things that look good if you wrap them up with paper. And Tiddlywinks? Tiddlywinks, it's hard to make an argument for a much bigger box because we really don't need a lot of space for a Tiddlywink box. Go ahead and open it, actually, so we can see what's inside. I bet it won't be there. [LAUGHTER] PROFESSOR: In fact-- AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. And probably the box-- let's see how much of that box is actually occupied by stuff. There is a bell. And that requires a certain depth of box. AUDIENCE: And [INAUDIBLE]. PROFESSOR: [INAUDIBLE].
[LAUGHTER] So that's a little deck of cards and a bell. And the bell, I'm not really sure the bell was clearly in the original version of it that was started in [? the beginning. ?] The deck of cards certainly was. OK. AUDIENCE: [INAUDIBLE]. PROFESSOR: Good. Is there a copy deck? AUDIENCE: Yes PROFESSOR: OK. There actually is a copy deck. Good. All right. It's not just a cardboard pole. So again, these products are all designed to catch your eye-- it's really up on the store shelf [? like that. ?] I believe the standard shelf is 18 inches. That's [INAUDIBLE]. That's a standard industry width. If anyone's worked in retail and knows that number by these different [INAUDIBLE]. And if you sell boxes, basically, this wide, you can sort of stack them either on top of each other. And then put one vertically in front to show it. This is Risk, and if you take that box off, then you can see a whole bunch of other Risk boxes that are back there. You can take that. I believe that there is, with the Candyland and the Chutes and Ladders-- that's kind of an interesting situation where they're a little bit off, they're a little shorter, both horizontally and vertically, so that you can actually fit three in a row. So the Candyland, the Chutes and Ladders-- there's probably some third very simple game-- but I can't think of one-- all fit in a row. And then you just see this giant wall of Hasbro. That's what they want you to do when you come in. Where the boxes are placed in a store, whether it's Target or Walmart or some specialty game store-- it's all paid for and sold ahead of [? price. ?] How much Hasbro pays the retailer for placement determines whether it's the first thing you'll see when you come into the store or whether it's stock that they are keeping in the back room that you have to ask for. That's [INAUDIBLE].
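The shelf-width point above reduces to simple arithmetic: an 18-inch shelf section divided by the box width gives the number of facings. A small sketch, with the box widths as invented placeholder values (only the 18-inch shelf width comes from the lecture, and even that the professor hedges):

```python
# How many boxes fit side by side in one 18-inch shelf section?
SHELF_WIDTH_IN = 18

def facings(box_width_in):
    """Whole boxes that fit across one shelf section."""
    return SHELF_WIDTH_IN // box_width_in

print(facings(18))  # a full-width box like Risk: one facing per section
print(facings(6))   # a narrower kids' box: three in a row, the "wall of Hasbro"
```

This is why slightly undersized boxes like the Candyland and Chutes and Ladders examples matter: shaving a few inches off the width changes how many facings a publisher gets for the same paid shelf space.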
So maybe they'll look at one display case somewhere where they're [INAUDIBLE], but for the most part, you're not going to be able to pick a game off the shelf unless the board game manufacturer themselves have actually paid for that shelf. And they're going to make it back, for both of them, they're going to make it back by the end of the season because that game [? is repeatedly going to be ?] sold. And it's OK that they're selling the boxes to people for [? any ?] [? or Monopoly. ?] [INAUDIBLE] to somebody else. [? You all might ?] already have a copy of Monopoly, but they don't want people to take that into consideration when they [? want to buy this. ?] How many people have multiple copies of Monopoly? I have two. OK. How many? AUDIENCE: Four, maybe. PROFESSOR: Four? Different kinds of Monopoly? AUDIENCE: Yeah, they all have a different skin, like there's [INAUDIBLE], Star Wars, Millennial, on and on. PROFESSOR: There's just different kinds. AUDIENCE: Yeah. I have Star Wars, Nintendo-- PROFESSOR: Nintendo? AUDIENCE: --Pokemon, and then the original. AUDIENCE: Yeah. PROFESSOR: I can imagine Pokemon. I'm interested in Nintendo. AUDIENCE: So we have Junior. I learned to play Monopoly when I was, like, three years old. But that [? has big board. ?] PROFESSOR: What's the difference? Is it-- AUDIENCE: Everything is just simpler. And I also do like that the dollar amounts are one, two, three, four, five instead of all these hundreds and stuff. The ones don't really matter. PROFESSOR: Right. You don't have to [INAUDIBLE] under a hundred. AUDIENCE: Yeah, [INAUDIBLE], but it's basically the same game with no Community Chest. PROFESSOR: Is it [INAUDIBLE]? AUDIENCE: Yeah. PROFESSOR: I mean, [INAUDIBLE] at least? AUDIENCE: Yeah. PROFESSOR: Yeah. OK. And you have the Junior, too? AUDIENCE: Yeah. AUDIENCE: And then, I have Pokemon, and I also have those electronic ones.
PROFESSOR: Oh, are you talking about electronic devices that you carry or something you load into a computer? AUDIENCE: Both. PROFESSOR: Oh. OK. AUDIENCE: I have a disk and I have a Monopoly thing that's kind of a Game Boy, but it's Monopoly. PROFESSOR: Oh, OK. I can see LEDs that light up. AUDIENCE: Those ones with the disk, you're playing against the computer or against a person, and the little Game Boy thing, you're playing against the computer. PROFESSOR: Cool. Yeah? AUDIENCE: I have the new one. PROFESSOR: Different versions? AUDIENCE: Yeah. Regular, I think [INAUDIBLE], I have a rip off of Monopoly that's my town. And so different businesses in town-- back in 1985, now they don't exist anymore-- paid for their business to be put on the board. PROFESSOR: Oh, so it's [INAUDIBLE] game, basically. AUDIENCE: Yeah. And I have Junior Monopoly, and I also have Junior Monopoly, dinosaur theme. It's by far the best. PROFESSOR: [INAUDIBLE]. AUDIENCE: Yeah. I do think the board was a little bit smaller. I don't think there were quite as many spaces. PROFESSOR: OK. So was it the best because of dinosaurs or because it's actually best to play? AUDIENCE: Well, Junior Monopoly is best to play, [INAUDIBLE]. PROFESSOR: Oh, it's the dinosaurs. Right. I think the [? version ?] is very similar to the local town version. And it's properties of the [INAUDIBLE]. Does anybody know where the original road names came from? AUDIENCE: Atlantic City. PROFESSOR: Atlantic City, New Jersey. And that's the version that we've got. The version that we've got now is being published by Winning Moves Games, which is kind of like a boutique Hasbro subsidiary that sells classic versions of all of these games. And so they sell them at fairly high price points. I don't know if the price tags are still on these games but you can take a look. But they're more expensive than the versions that you buy in Target. They have little pieces.
Therefore, people who like physically buying these games want [INAUDIBLE] rather than the cheapest [INAUDIBLE]. I'm trying to remember other things that Rob mentioned. Every little thing that you'll find inside any one of these boxes costs money. That's just [? the only ?] thing about that. Scrabble, you have all the tiles. You have the board itself. If you're lucky, there is a nice little piece of cloth tape holding the board together, which [INAUDIBLE] opening and closing and can fold up flat. If you're unlucky, just [? one that ?] doesn't keep [INAUDIBLE] from folding in half and falling out later on. That crease, it's just not going to last. Risk, of course, has had many different versions, some of them with soldiers and horses, some of them with just numbers. I had the version at home that was big [INAUDIBLE] class of numbers. There was a designer who came to our lab who spoke about Risk. He used to work for Hasbro. And he had a version where, instead of soldiers and horses, it was arrows. And so you move all of these arrows. And it was like a war map, and all [? these forces ?] [? were marked. ?] Unfortunately, the arrows were flat. And you couldn't even pick them up. And the side things were kind of sharp and pointy. And people kind of could get hurt. And that version [INAUDIBLE]. There's a good story about why that happened. So today, when you actually play these games, I'd like you to actually take a look at all of these pieces and from [? there, ?] you might want to try to [? determine ?] how much each piece actually costs and just add up what the manufacturing costs of this box might have been. I don't know if anybody here is from chemical engineering or any kind of manufacturing. Actually, Todd, you know [INAUDIBLE]. SPEAKER: Yeah. Not injection molding, but-- PROFESSOR: So it might just be a fun exercise versus, like, a deck of cards.
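The in-class exercise above — estimate a cost per component and sum it for the box — is just a tally. A minimal sketch of that bookkeeping; every component name and price here is an invented placeholder, not a real manufacturing figure:

```python
# Back-of-envelope component cost tally for a hypothetical boxed game.
# All values are made-up placeholders for the exercise.
components = {
    "folding board (cloth-taped)": 0.80,
    "deck of cards": 0.25,
    "plastic tokens (x6)": 0.12,
    "dice (x2)": 0.06,
    "rulebook": 0.10,
    "box and insert": 0.60,
}

total = sum(components.values())
print(f"estimated unit cost: ${total:.2f}")
```

Running the same tally for a plain deck of cards (one component, one small box) makes the professor's point concrete: far fewer line items, and a much smaller total for the same shelf price.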
AUDIENCE: And I'd also recommend reading the rules and actually trying to play it the way the rules say, not the house rules you might have remembered from when you played it as a kid, just to see how they are presenting this game. How are they presenting the rules to somebody who just got this as a gift, and they're going to try to figure out how to play this thing? PROFESSOR: Yeah. See if they've improved the rules that [INAUDIBLE]. I actually heard about a classic version of Monopoly where the original set of rules are [INAUDIBLE] updated print out. You can see how they described the Scrabble game. Does this one have a bag? Does it have a bag? Oh, wooden pieces. And a [INAUDIBLE] bag and think about how that makes things easier. [INAUDIBLE] And if you've played these games before-- and I think many of you have-- definitely go into it by reading the rules first, so not just playing the same way that you remember. And also try out some of the other games. OK? Cool. Let's talk a little bit about the games. So I think everyone got a chance to play. It's all right that the jigsaw puzzle game is still going on. So let's talk about the pieces. Let's talk about things that you're moving around. I mean, how many of you feel that the pieces that you are playing with are significantly different from pieces that you remember playing with? No? Yeah? AUDIENCE: For Candyland, the board is [? totally better. ?] PROFESSOR: Oh, yeah. So the board design is kind of insane. AUDIENCE: It's really scary. AUDIENCE: And I remember it being a little candy thing like a square. [LAUGHTER] AUDIENCE: The board is potentially like this. It's like-- PROFESSOR: It's like [INAUDIBLE]. It's like a [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. PROFESSOR: But the actual cards weren't all that different, right? AUDIENCE: No. I remember the game being each space [? like a candy. ?] PROFESSOR: Right. AUDIENCE: Now, each space is a color. PROFESSOR: Oh, OK. AUDIENCE: I thought I remembered that. AUDIENCE: Me, too.
AUDIENCE: Yeah. PROFESSOR: Wait, colors or candy? OK, colors? Hands up. All right. Different pieces of candy? OK. AUDIENCE: Well, he-- PROFESSOR: Well, I think it was-- AUDIENCE: It wasn't every piece, every space. But there were a number of-- every other space had some kind of image on it. PROFESSOR: I think it still has that. I think it still has the occasional one. AUDIENCE: [INAUDIBLE]. PROFESSOR: The thing is that if every piece was a different kind of candy, it would probably be more colorblind-friendly because you'd be able to actually just see the kind of candy even if you couldn't tell the colors. AUDIENCE: It's not very colorblind-friendly. [LAUGHTER] PROFESSOR: This is not friendly with anyone with eyes, I guess. [LAUGHTER] [INTERPOSING VOICES] PROFESSOR: But that's just the visual design of the board. The actual spacing on the board is not all that different. So you've had the chance to pick up the pieces, slam on the bell in Pit and stuff like that. Some of these pieces, like in Clue, have a lot of money put into some parts that aren't actually all that useful, like the knife and the lead pipe and things like that. Was that solid metal? AUDIENCE: Yeah. PROFESSOR: Were those pieces? They are sort of like metal pieces. AUDIENCE: It's a sharp knife. PROFESSOR: It's a sharp knife? AUDIENCE: Well, it was when I was 10. PROFESSOR: So because, in assignment two, you are thinking about aesthetics, what do you think about the choices of the materials that they used for the various parts of the games that you played today? Or that you [? four ?] played today? AUDIENCE: Well, one thing that bothered me with Clue was if you committed a murder, or if somebody committed a murder, and you find a body, you're going to know whether they were murdered with a knife or a pipe. They don't look the same-- or a gun. [INTERPOSING VOICES] PROFESSOR: A very astute murderer. I don't know. So yeah. There is a problem. I played a version of Clue where the body is down a dark staircase.
So you can't actually see what's at the bottom. It's kind of-- you only see the outline. So I think that was kind of the visual explanation for that. But you're right. It's not a very good justification. Although, if people remember the movie, he's kind of stabbed by everything, right? He's hit by a lead pipe and stabbed by a dagger. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. Which was the first blow? It doesn't matter what hit him after he died. [LAUGHTER] How about for Risk? You've got horses, people, and cannons? Is that right? AUDIENCE: Yeah. PROFESSOR: Yeah. I always found it difficult to remember which was which, like what was what value. AUDIENCE: I seem to recall that the pieces were relatively bigger. So I remember the cannon being at least as big as the cavalry, maybe larger. And that way, oh, that's worth more than the cavalry, which is worth more than that little guy. PROFESSOR: The pieces could just have been shrunk due to cost. It's possible. I remember big pieces, too, but we had that [? compilation. ?] And maybe we were smaller, so the pieces looked bigger. [LAUGHTER] AUDIENCE: Our pieces were also Roman numerals. PROFESSOR: Yeah. I remember seeing people who had that type of pieces and envying them because all I had were freaking Roman numerals. But they were easy to count, at least. AUDIENCE: Yeah. PROFESSOR: All right. So for the rest of today, you've pretty much got time to work in your teams. A few of you can continue playing, finishing up your games. The prototyping materials are all up here. Feel free to come up and grab what you need. Rick or I will be in the room at any given time. So you can come up and ask us questions. But use this time to be able to meet up with the team. SPEAKER: Yeah. If you want us to play your game, we can't, but we've got in-class play tests on Wednesday. PROFESSOR: This Wednesday? I have the schedule here. One second. Let's see. March 12th. Yes, we do, in fact, have a play test this Wednesday.
So make sure your games are ready. Make sure you have a draft of your rules so that you can test whether people actually understand your rules. OK? On Wednesday. [INTERPOSING VOICES]
MIT CMS.608 Game Design, Spring 2014
17: Guest Lectures by Professional Designers
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, so let's get started. Today we have the pleasure of hosting two local game makers, here to talk to us about some of their work. We've got Glenn, and I don't have your last name in front of me right now. GLENN GIVEN: Given. Glenn Given. PROFESSOR: Glenn Given from Games by Play Date. And you're located in New Hampshire? GLENN GIVEN: Yes. Well, 2/3 of us are. It's a three-person studio. Two of us in Andrew and one of us who's in [INAUDIBLE]. PROFESSOR: Great. And then you do board games and card games? GLENN GIVEN: Yes, a lot of them. PROFESSOR: And then Mack. How do you spell your last name? MACKENZIE CAMERON: Mackenzie Cameron. PROFESSOR: And he is one of the co-organizers of the Game Makers Guild, a local meetup that meets at various places around the Cambridge area for people who are interested in developing board games and card games, meeting with each other, and learning more about how to make them. So we're going to go Glenn first, then Mack. Talk a little bit about what you do, about the kind of work you do, and then afterwards we're going to open it up for questions. GLENN GIVEN: Sure. So I make games. I make board games. I make card games. And I try to never stop making them. I don't have a particular-- anything that I can play, that I can make happen, I'll do it. We do party games. So I just got 2,500 copies of my first party game delivered to my house three hours ago, which was interesting, moving those myself and then getting here. But I'm also developing a gigantic, stupid board game that's just called A Big Dumb Wargame, which is like-- you guys played Axis & Allies or any of those old war games?
So I used to play that when I was in high school because I liked being lonely, and then I decided I wanted to make a version of that that's actually fun. We're also working on a game called Pack the Pack. You guys know Cards Against Humanity? They're running this contest called the Cards Against Humanity Tabletop Deathmatch, and they got indie developers from all over the country to submit stuff. And so one of our games is in that. If you've ever played old Diablo, remember when you would collect loot and you would have to fit it in your bag and it was all weird shaped? So it's an inventory Tetris game where everyone is pulling tiles, which are effectively dominoes with different images on them, and then you're aligning them to make gems. It's like an analog version of Super Puzzle Fighter. But the moral of the story is, everything I've just described to you doesn't tell you anything about the games except in reference to other games. But that's how I do design. I look at games that I like, and then I dissect them and take the parts that I want and put them back together with other parts and see if there's a theme that goes with it. But yeah. Also, I did printing and publishing for 10 years, and then that drove me madder than normal. So I left that to do this. So me and my team very rapidly design games. We bring a design all the way to a publishing level in about a month, from conception to execution. They're not big, huge endeavors, but it's good to stay really busy with it, exercise with it, and realize that sometimes a project is going to fail and you can back away from it and go, ah, well, that month was kind of a bust. And so we're actually also exploring different kinds of business models for board games, because there isn't really huge money in this. There are very few people who are going to get very wealthy on it, or even do well on it. MACKENZIE CAMERON: Scare them off, right? GLENN GIVEN: Yeah, seriously.
So maybe you make the next Settlers of Catan, and then you're selling a steady 5,000 copies a year, which is really not that much when you think of it. Depending on the way your business is set up, you may see a fraction of that money. So that's why a lot of very, very good designers-- like, I'm a big fan of Bruno Faidutti, who just came out with Mascarade. Mascarade is a really cool game. MACKENZIE CAMERON: [INAUDIBLE]? GLENN GIVEN: Yeah. So he'll put out a couple of games, and then he just gets a cut from each of them. But using short-run printing, we directly manufacture our monthly games, and then people subscribe to us through stuff like Patreon. And then we send them physical copies of the games that we make. And I brought nine different games that I've made in the past six months. MACKENZIE CAMERON: I didn't bring any. GLENN GIVEN: Yeah. Well, you can borrow some of mine. You can borrow the bad ones. And then we also go to Kickstarter for larger games, ones where we want to make multiple thousands of copies. But yeah, I will never stop talking. OK, cool. Yeah. Board games are really awesome. I think that you guys know that. And they're more awesome now than they used to be, and not because they're more complicated, because complexity isn't what makes a game really good. In fact, usually a really complex game is really bad, because it's a sign that there was a huge problem and the designer said, oh, I know how to fix this. I'll add another page to the rule book. That's poor planning. But games are better now because people want them more. I think the reason people want them more is that we crave physical, analog time with other people now, more so than when I was a kid, because we're super interconnected with smartphones and all that stuff that we can't live our lives without. I'm twittering all the way down here doing 80 on the highway because I'm a bad person.
But the ability to say, OK, I've been staring at these screens all day long, and my entire life is scheduled. I need to schedule my relaxing time, and it needs to not be me staring at a screen. So I think there's something in the zeitgeist of that personal feeling that has caused a resurgence in face-to-face gaming. MACKENZIE CAMERON: Yeah. Because even with apps and stuff, I will straight up, hands down beat anyone that plays the Race for the Galaxy app when we play face to face, because the skills that you need are completely different. There are things that board games do really well that you don't find in other mediums. GLENN GIVEN: So I play a ton of Netrunner, which is a really awesome game. MACKENZIE CAMERON: The rule book on that is awful. GLENN GIVEN: The rule book is the worst. But some people have hacked together-- there's a program called OCTGN, pronounced "octagon," which is like a virtual tabletop where you can basically simulate any game that's been made. I mean, there's a whole interesting other discussion of how you do internet piracy of board games. That could be a class. It's a good idea for a class. Anyway, so OCTGN is really neat, except one of the things about Netrunner is that-- if you've played it, raise your hands so I know-- AUDIENCE: Does the original version count? GLENN GIVEN: Yes. Sure. Although the new version is better. You should totally get back into the new version. It's amazing. It's by the same guy who designed Magic: The Gathering, so it has some similarities to that. But what makes it really neat is that it's an asymmetrical game. I'll be an evil corporation, for instance, and you might be a scrappy computer hacker. And you're trying to infiltrate my servers, which are represented by my deck of cards, my hand of cards, my discard pile of cards, or other things that I have installed, and find what my nefarious plans are.
And you get all these crazy programs that allow you to circumnavigate my defenses, and I get all these programs that allow me to put down defenses, or maybe I get a card that says, I orbitally bombard your apartment building, which is always fun. But a huge part of that game, although you can play it on OCTGN-- a tremendous part of that game is bluffing, because what makes the game really rich is not that it's asymmetrical, but that there's a huge component of hidden information. As the corporation in that game, everything I do, I do by placing my cards face down. So I know what they are, but you don't. And you have to start to count cards a little bit, or start to make educated guesses about-- can I attack his hand even though I have no offensive capabilities here? Is that going to mean that he explodes like a thing in my brain and my computer hacker dies? There's all that. So a lot of that game is about managing what's called tilt, which is something you get in poker. If you play poker, when you start losing at poker, if you forget that poker is kind of a long game and that you should really be playing it with an eye on what the rest of your week looks like as opposed to this hand, you'll start to play worse because you're doing worse, and you'll just tilt. So it's kind of like a tell, but it's more than just saying I've got a bad card in my hand. It's going, I'm losing, so I need to play harder, which would be great if it was handball or something, but in a card game, that doesn't work, because really what you want to do is establish regularity in what you're doing, especially in a game that's that mathematically perfect. MACKENZIE CAMERON: I think I'm going to go ahead and introduce myself. GLENN GIVEN: Please do. I don't even know where I am. MACKENZIE CAMERON: So my name is Mackenzie Cameron. I'm an event coordinator for the Game Makers Guild.
So we host a lot of events, where if you guys have any games at any level of prototype-- whether it's nearly finished or you just scrawled something down on a napkin and you're like, it might play, let me go grab some dice or whatever-- we accept that. Check out our site. meetup/gamemakersguild. Actually, we just recently got gamemakersguild.com. So you can go to that website. GLENN GIVEN: That's a coup. MACKENZIE CAMERON: --which I'm a little [INAUDIBLE]. It's sweet. I'm slowly trying to make sure that we build that out. GLENN GIVEN: [INAUDIBLE]. MACKENZIE CAMERON: gamemakersguild.com, which is awesome. I don't know how we managed to get that without somebody else already having it. But I host some of the other events that we host. We have your standard play testing, but we also do indie board game showcases. I've hosted that in Brookline three times now, where we just get anyone that has a cool design. So make your designs, and then we try to bring them out to the public. And then what's the other stuff that I do? I try to get my hands in everything board game related in Boston. I help out at Knight Moves, the board game cafe out in Brookline, which is a lot of fun. If you guys don't know about it, you pay $10, you get in. They have a library of something like 700 games. You can play for as long as you like with anybody that's around. I also do a board game-themed web comic, overboard-comic.com. And then there's one more thing. I'm going to be doing a panel at PAX this year. Me and five other folks are going to be talking about board game language, which should be pretty interesting. It's called Talk Like a Board Game Geek, if anyone is going to PAX. Who's going to PAX? Not many of you. You should totally go. It's the best thing. It's the reason I moved to Boston. And then in the more distant future, I've got plans for my maiden voyage onto Kickstarter, called Killer Croquet, which is a croquet-based murder simulator. That will be a board game.
And it's amazing. Not to say that Glenn is wrong about everything, but I'm doing the one idea, build it up, do the Kickstarter sort of thing. The rapid iteration is really important, but there's definitely lots of different paths toward making a board game and having it be successful. GLENN GIVEN: I think the main difference is that he has another job. MACKENZIE CAMERON: Oh, yeah. GLENN GIVEN: And I don't. MACKENZIE CAMERON: That's true. I do this for no money at all. GLENN GIVEN: Yeah. I did this for no money too, except that's a problem. So rapid iteration. But do you want to talk about how you went from game idea to prototype? MACKENZIE CAMERON: Sure. GLENN GIVEN: I mean, do you want to let them ask questions? PROFESSOR: [INAUDIBLE]. Process. GLENN GIVEN: Yeah. PROFESSOR: And then the next question. MACKENZIE CAMERON: So, process. Mine is definitely rapid iteration, but I've finally settled on one idea that I actually want to push forward, rather than trying to bundle up a bunch of small ideas-- get the one big one. The main reason for that is that I'm hoping that with that one idea, I'll be able to use momentum to generate more funding for the general brand name of my design studio. But coming up with that idea is definitely a rapid iteration process, where you come up with a very sketchy idea. Some designers start with a mechanic, so they're like, I want to do something that's deck building, where you build a deck over the course of the game, which makes an engine that you then draw from to do other cool things. And then other designers-- myself, usually-- start with some sort of system or some sort of theme, like croquet, and see if you can turn that into a board game by adding different mechanics and seeing what works and what doesn't. The Game Makers Guild is pretty fantastic for that. Because when you design a game, and on your first [INAUDIBLE] you look at your game and you go, this is amazing.
This is going to work. And you try to tell someone about it, and you try to have them play it. And that doesn't always work so well. GLENN GIVEN: It never works. MACKENZIE CAMERON: It never works, actually. GLENN GIVEN: The first time you put your thing down on the table, it's maybe going to burst into flames. MACKENZIE CAMERON: Yeah. A best case scenario is that it bursts into flames and you've just made some mistakes. And it's basically unplayable because you need dice. And you realize that when you roll the dice, the numbers always add up to some combination of factors such that, when you apply that to the system, it's like, oh, well, you're supposed to move one space, but every time you do this, you actually just die immediately. GLENN GIVEN: Yes. So people can break it mechanically. You find a tautology element in it and just repeatedly-- MACKENZIE CAMERON: But one of the worst things that you can do-- I had a game that I was developing for a while. I pitched it to Game Salute. And it was a semi-cooperative game. So the idea was that all players have to work together so that everybody doesn't lose, but then once you reach a certain threshold, only one player can win. It's kind of a difficult concept to balance well, but I'm eventually going to really finagle it and get it working. And a bunch of my play testers played it, and it worked out exactly the way I wanted it to. They'd play it to a certain point, and then they turned on each other, and then there was this big, climactic battle, and it was fantastic. I was watching just the mechanics of how it worked. And then I asked them-- it seemed to work out really well-- how did you guys enjoy it? They said, it was awful. I said, but it works. You guys functioned in exactly the way the design intended.
Yeah, but it wasn't fun. Which is something that you realize as a designer-- you're putting stuff together, and the things that you think are really interesting, even if they're working, don't necessarily come across as fun. AUDIENCE: But that's really interesting. They finished the game. They didn't like playing the game, yet they still played the game. MACKENZIE CAMERON: Yes. AUDIENCE: Is that common when you do play tests? [INAUDIBLE] MACKENZIE CAMERON: They're involved in the game. GLENN GIVEN: Usually, especially if it's in a dedicated play testing group, and especially in the Game Makers Guild, where it's primarily designers-- that has its own downsides to it. But yeah, they'll see it through. I mean, they'll go [INAUDIBLE] on it, push that boulder all the way up just to see. I mean, even when we've found stuff that is completely broken in a game. I was playing a game where it was robots rising up against humans or something, and there's this whole propaganda element. And in the first few turns, we had identified that if you just went to this one space on the board and continued to buy propaganda posters, you could shut every other player out of the game. And so it's just, I'm going to go there. I'm not going to move. I'm just going to keep doing this for the entire game until I win. And at that point, you could be like, OK, I don't want to play this game anymore. This is dumb. But we just did it for a half hour, and different people tried different things using the rule set to unseat that bad decision. So sometimes, even when you identify something that's broken, it just gives you a new wound to start poking at to see, does this hurt? It's like a doctor. That's how doctors work. Stab you. MACKENZIE CAMERON: Well, that's that doctor's game. Solve them. That's great. Encouraging play testers to finish a game can be an art in and of itself.
Oftentimes, when you get designers that think they can fix your game for you, you will be hard pressed to get them to finish the game, because they're like, no, no, this is where you should do it like this. But on trying to urge your play testers along-- we actually had an event where we brought somebody in who was an expert on generating play test feedback for video games, which need a lot of paper prototyping too, so it was a carryover. And oftentimes it's just trying to encourage players to go all the way through, because sometimes even as players, even with published games, on the first playthrough you'll be like, oh, this one strategy is completely broken. I don't know if anyone is familiar with the card game Race for the Galaxy. Maybe a couple. But there's one particular strategy-- the first time you play through, there's a military strategy where, if you have a certain level of military, you can play a lot of cards roughly for free, based on the mechanics. And almost everybody that plays the game for the first time realizes, oh, man, military is such a great strategy. How did they make this game? And then the second time you play through, two players try to use that strategy, and the third player does a different strategy. And all of a sudden, that's the strategy that's overpowered. And oftentimes, because the game doesn't completely lay itself out in front of all the players in the first play test, the perception that it's broken is not always necessarily true. GLENN GIVEN: And you can't find that kind of stuff until you play test it. [INAUDIBLE] Questions? AUDIENCE: Essentially, I think [INAUDIBLE] example of that is [INAUDIBLE], where how many people [INAUDIBLE], like, oh, man. This strategy is so overpowering. This person scored 70 points with it, and none of us were above 40 or so. And then you respond like 70. [INAUDIBLE] GLENN GIVEN: Well, there is always stuff like that.
I think that illuminates another interesting thing about play testing. The group that you're play testing with really can make a huge difference-- not just whether this person is a designer or not, but how deeply they want to get into mastering that system, or how much they care about mastering that system. MACKENZIE CAMERON: Yeah, because we offer multiple tiers of play testing at the Game Makers Guild. We have just the standard, where you come and you play with a bunch of designers, and they tell you it's crap. And you go home and you cry, and then you make it a little bit better and you feel a little better. But we also have intensive play testing, which is phase two, after you get a certain number. We have a nomination and certification system. We're trying to set up a Game Makers' seal of approval so we can hopefully better pitch our games to publishers and the general public. But the phase two intensive is the chance for you to get the same group of people together and play your game a bunch of times. And part of the fun of that-- for a designer, it helps to iterate, even if the game is broken, so you can find those little bits and pieces where the game completely falls apart. But for a play tester, it can be kind of fun to just find-- I mean, it's like when you play a video game and find some element of the game where the graphics are screwed up or you fall through the world. It's almost just the exploration of that. And a lot of my play testers enjoy finding the chinks in the armor, so to speak. GLENN GIVEN: Well, I think the other thing about doing repeat play tests, especially with the same group-- has anybody played the Vlambeer game, Luftrausers, that just came out? It's great. You should play it. It's totally worth the money. It's kind of like a Lunar Lander meets Asteroids game.
So you're kind of drifting, but you can customize your airplane with different engines and different bodies and different weapons. And they all make the game play-- they all screw with the physics of the game in different ways. Well, unless you put three or four hours into playing that game, which is a lot of time for a very arcade-style game, you won't really know, how do I get past this certain threshold of points? And what is my sweet spot in controlling this airplane so that I can maximize how powerful I am with it? The same thing happens in board games, though often not in that time frame. With Netrunner or Magic or whatever, you play and play and play and play. And you start to realize that some of the things in this game are just red herrings. They're just included to dilute the power of other things, or they're included to counterbalance very specific scenarios. And things that seem like a total waste are actually really important later on in the game, and you just can't get that up front. MACKENZIE CAMERON: So just real quick, going back to that point, there is a certain difficulty with the length of games. If you have a game and you're trying to play test it and you hit 30 minutes and it's not fun, and then you tell your players, don't worry, there's two and a half hours more to it and we'll get it figured out, you've got a much harder row to hoe on that. Which is actually part of why, I think, you'll see a lot more games that are designed for the 45-minute mark, because that's usually-- I mean, that's the easiest way to iterate a game. GLENN GIVEN: It's also about the time. I mean, think about the amount of free time you really have in your life. MACKENZIE CAMERON: Well, they certainly have a lot. They're students. GLENN GIVEN: But even then, I mean, there's kegs to stand on and all that stuff. I don't know what they do at MIT. MACKENZIE CAMERON: It's some sort of giant-like robot. GLENN GIVEN: Yeah. You build a robot.
I'm sorry. I have a dime in my tie, because I'm a professional. It's just driving me mad. Otherwise-- another important thing about being a designer: develop a mental disorder. That will really, really help you get the tiny problems, like dimes in your tie, out. So yeah. Half hour, 45-minute games are becoming the thing as board games become more and more popular in America-- well, for the past 25, 30 years they were very popular in Europe. And when we talk about board games, I think we're not talking about traditional games like Monopoly, Scrabble, that kind of early American family board game. MACKENZIE CAMERON: Because those are still great games. There's so much out there. When we talk about board games, we talk about boxes roughly this big, with the designer's name on the box, games that take between 45 minutes and 3 hours. As opposed to things like Warhammer and different war games, because there are whole other cultures out there, which is really cool. But when we say board games, we're talking about that. GLENN GIVEN: And that's different from, say-- I like to use the term tabletop games, but I'll expand that really far. I do tabletop curating for the Boston Festival of Indie Games, which is coming up in September. MACKENZIE CAMERON: We should talk about that. GLENN GIVEN: The event is like a hockey rink filled up with indie games. I don't know much more than that. We just opened the submissions yesterday. AUDIENCE: [INAUDIBLE]. GLENN GIVEN: Well, one of the differences from last year, when I came onboard-- because it was my first year doing it-- what I really wanted to stress was that a tabletop game, or an analog game, is a different term than a board game. A board game is literally something with a board. But a tabletop game-- I'll stretch that and say card games are tabletop games. Role-playing games are tabletop games. You can be playing the clunkiest, most grognard D&D-- "grognard" is a French term for curmudgeonly old guys.
That's basically it. I think it actually is for generals or something. They're the worst. So Apples to Apples is a tabletop game. What's the game where you're pulling out the sticks and the marbles fall down? Ker-Plunk. God, I love that game. Ker-Plunk is a tabletop game. Dominoes is a tabletop game. Bocce ball is not a tabletop game, but it's great. But also, if role-playing games are tabletop games, does that mean a LARP is a tabletop game? Because a LARP is just a role-playing game without dice. And if LARPs are tabletop games, then how about-- MACKENZIE CAMERON: I think they're technically dexterity games. GLENN GIVEN: Well, if you're doing proper LARPs. But if you're doing something like a Nordic LARP, which is the brand of live-action role playing from Norway, Sweden, and Finland-- MACKENZIE CAMERON: God, those folks drive me nuts. GLENN GIVEN: I was invited to a convention with those people last week, and they are hilarious. MACKENZIE CAMERON: The only one that I know of that they do-- there's a Battlestar Galactica one, where they actually rent out an old-- GLENN GIVEN: Battleship. MACKENZIE CAMERON: --battleship. GLENN GIVEN: For a weekend. MACKENZIE CAMERON: [INAUDIBLE] on it for a weekend, and yeah, they go and they play this game right out on frigid waters, in military bunkers, playing a Battlestar Galactica-themed live action role playing game. GLENN GIVEN: So it is a role-playing game, but it's also like an interactive theater performance. That game was crazy, apparently. So is everybody familiar with Battlestar Galactica? OK. So there are robots, and they look like people. So it was a Battlestar Galactica LARP, and they had people who were Cylons, who were agents. And then anyone who was wearing red was a hallucination that could only be seen by specific characters. MACKENZIE CAMERON: It gets better. GLENN GIVEN: From the second day of the LARP-- and this is like a three-day thing.
You frigging sleep on this battleship, and everybody is in character the whole time. On the second day of the LARP, one of the guys in red who was part of the plot as a hallucination-- they brought his twin brother in, and nobody else knew that he had a twin brother. So all of a sudden there are two identical hallucinations wandering around the ship, and everybody is flipping their fucking wig. So I guess what I'm saying is, if we can rent a battleship for [INAUDIBLE]. And then there are games like Johann Sebastian Joust, which is a digital game, but is it really a digital game? It's really kind of like tag. MACKENZIE CAMERON: No, I wouldn't say-- what is it? Spaceteam. I don't know if you guys know Spaceteam at all. It's an iOS game. You play it by-- basically, you have a user interface, and you press buttons before a timer runs out. Unfortunately, what button you have to press is not on your screen, but on someone else's screen. So they'll tell you to flash the paper screw, and you look on your thing-- do I have the paper screw to flash? But at the same time, is that a video game? Because there's not really much to it in terms of the digital aspect, so much as that it's a timer and you have to press buttons at the right time. You could make an analog version of that. GLENN GIVEN: You could make an analog version of that. AUDIENCE: So that brings us into the funding aspect of things. All these different categories, all these different markets you can think of-- how are you exploring them? How are you marketing your games? What are you marketing your games as? What's your [INAUDIBLE]? MACKENZIE CAMERON: Well, certainly-- GLENN GIVEN: I have no idea. MACKENZIE CAMERON: Certainly, when we say board games, we mean like a box about yay big, which is, in this instance-- I call it the hobby industry, though there are a lot of very nebulous terms for what that falls under. But the hobby industry is separate from the toy industry. I was actually at Toy Fair in New York.
Mostly-- I actually just wrote an article on my blog about how there is a difference between a game inventor, a game designer, and a game writer, and how that separates out the different industries that are out there. But for myself personally, the kind of market where we're hoping to see the most growth is people that are mostly on social media, a lot of people that are really into Kickstarter, and very into thematic games-- games that are both thematically interesting and whose systems are fair, balanced, and worthwhile. So my strategy is actually this: my Kickstarter is in about six months. And in that time, I'm going to be creating a trans-media storytelling ad campaign that will take the theme of the game, turn it into sort of a story, and then promote that, as well as talking about the general design, hoping as a result to engage social media networks that will then generate the needed buzz for when the Kickstarter campaign actually launches. GLENN GIVEN: A lot of fundraising has less to do with what you're making and more to do with what you can conceivably accomplish in marketing. So if you have no budget, you have to rely on certain marketing techniques as opposed to others. A lot of people who have no budget like to think that commercials don't work. That's stupid. Why would anybody think that, other than as a way of convincing themselves that they've made a noble choice? No. They just didn't have the money. Sometimes commercials really work. For instance, as I was driving down here, I relaunched a Google AdWords campaign for our game Slash, because now I can fulfill orders for it. Using multiple marketing vectors in order to get people interested in your project is really, really important, especially because-- it's a very competitive space.
The hobby and toy industry was like $6.1 billion last year, which is not insignificant, but it isn't video games. But unlike video games, it is an industry that has grown 11% on average, year over year, for the past five years. MACKENZIE CAMERON: Before that, it was growing even more than that. GLENN GIVEN: No, it's actually gone up, and it's predicted to keep growing through 2018. I just got the University of Texas research. I take it real serious. So it's a very fast growing industry, because there's space in America for it. It's not as fast growing in Europe, because that's already a supersaturated market. But stuff like Kickstarter, Patreon, and Indiegogo, or even just going to shows and selling your wares, which is actually really the best way to do it-- the way that you get people to come to them is to find the audiences that are already keyed into that. They like the idea of, I'm helping a person make a thing. I'm not investing, but I'm kind of pre-ordering this thing. So you've got to identify what those markets are and find the best ways to reach them and convince them to throw money down a hole called Kickstarter. Because not everything on Kickstarter gets funded, and not everything that gets funded on Kickstarter gets done. And then-- MACKENZIE CAMERON: There's some horror stories. GLENN GIVEN: --many of the things that get funded and get done are crap. MACKENZIE CAMERON: [INAUDIBLE] a little bit. Like you said, it's a growing industry-- the hobby end of it, as opposed to the toy industry. If you're trying to get your games to Hasbro, or I guess Parker Brothers, or anything else that ultimately leads back to Hasbro-- GLENN GIVEN: Yeah, because Hasbro, [INAUDIBLE]. MACKENZIE CAMERON: --anything where you want to get your game into Toys R Us-- at that point, that industry is pretty static. So as a result, it's very hard to get known or mentioned at all in that.
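The growth figures quoted above compound quickly. As a rough back-of-the-envelope sketch (the $6.1 billion size and the roughly 11% year-over-year rate are the numbers mentioned in the discussion; the projection function itself is purely illustrative, not a forecast):

```python
# Illustrative only: compound the ~11% year-over-year growth rate
# quoted above, starting from a ~$6.1B hobby/toy market.
def project_market(size_billions, annual_growth, years):
    """Return projected market size after compound annual growth."""
    return size_billions * (1 + annual_growth) ** years

# Five more years at 11% would take $6.1B to roughly $10.3B,
# i.e. about a 1.7x increase.
print(round(project_market(6.1, 0.11, 5), 2))  # 10.28
```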
But because the hobby games industry is growing, there's more and more people that are looking for the ground floor. And as a result, if you can get your foot in, make a little brand for yourself, get your name out there, then when you actually do start making games, people will recognize you, and you'll be able to kind of market yourself as a brand. Again, that's probably one of the most important things. And the only reason this is possible is because game designers started putting their names on the box. When we talk about designers, let's talk about Reiner Knizia, one of the most prolific game designers. His brand is great, and his name is known in the hobby game industry because the dude designed like 300 games and actually has them all published. And as a result, when you see Reiner Knizia's name on a box, you're going to start to recognize that a little bit. When you're able to generate-- GLENN GIVEN: I mean, they're not all that good. MACKENZIE CAMERON: They're not all good. But Reiner Knizia. I've heard that name before. GLENN GIVEN: I agree. I mean, he created a brand for himself just like video game companies do, or labels in music. The music industry-- you guys weren't around in the '90s, so you don't know it existed. For you, it still exists. Yeah. So I think the interesting thing about the market and being able to have that personal brand as a designer is really important, because the people who are playing board games right now, the growing market, are, I guess, iconoclastic people. They're "hey, we're all individuals" types, but they're all really big on geek culture. And geek culture is bringing 75,000 people here this weekend, so maybe it's not as marginal as we'd like to think that it is. But it is a market that values individuality and individual productivity. So finding the ways to market to those people directly can often be really rewarding, especially if you're putting your face on what you're doing.
MACKENZIE CAMERON: Another example is Daniel Solis, who's down in-- GLENN GIVEN: North Carolina. MACKENZIE CAMERON: North Carolina. For the longest time, I mean, he was a graphic designer, but he started a board game blog. He just talked about his expertise in designing a lot of games. He's a very prolific guy, and he does a lot of stuff that's just completely open source, print-and-play games. GLENN GIVEN: Actually, he works out of his house like I do, and we actually do a conference call every morning with a number of other designers. MACKENZIE CAMERON: I was wondering how you knew him. GLENN GIVEN: He's just released-- you guys ever watch Firefly? That show? So he just did the layout for-- like his normal job is doing graphic design layout. MACKENZIE CAMERON: Where's your other stuff? GLENN GIVEN: So he just laid out the Firefly RPG, which is available for download now as a PDF, and then they're printing it later. So he does that, and then he has this whole I'm-going-to-make-games-a-lot thing and puts them up on DriveThruCards, which is an online service that allows you to do print on demand, which is another one of the things that in the past year has really shaken up the board game industry. You used to not be able to get anything done unless you were ready to do 3,000 copies of it, but now I can make a game, put it up through DriveThruCards, and you can go send $10 and download it. Or they'll actually print and send you individual copies. MACKENZIE CAMERON: But again, some of [INAUDIBLE]. So Daniel Solis has been doing all this stuff, just presenting himself as an expert, creating a brand of himself. Actually, I brought him up to Hacks last year as an expert. He, about six months ago, ran a Kickstarter for Belle of the Ball. And he leveraged some of his connections. And the game itself is very good, but the Kickstarter was wildly successful. I think he got something like three times what he asked for just in terms of kickstarting the game.
And that's a result, I believe, of a lot of his connections and leveraging his brand as an expert. At this point, he's got 5,000 Twitter followers and a decent blog presence. And as a result of having that before he launches a Kickstarter or starts to generate pre-sales for a board game, he's going to do much better. And the effort that he puts into it is really just talking about board games, putting information online for free, and establishing himself as someone who's an expert. AUDIENCE: So all of these crowdfunding models-- there is a little bit of precedent from publisher [INAUDIBLE] groups like GMT. But for the most part, they're fairly recent developments, because people will be willing to give money online. I'm assuming that your interactions [INAUDIBLE]. But what is the role of stores, especially hobby stores? MACKENZIE CAMERON: I also work at a game store, Eureka! Puzzles, down in Brookline. And that's an interesting function as well. I do a little bit of research in general on retail stores, not just board game stores, but anywhere that sells things. Retail is taking a nosedive as an industry, which is hard to imagine-- just stores selling things, as an industry. But the idea is, with the internet and [INAUDIBLE] like Amazon Prime, oftentimes you can find the same product online, and even with shipping, they can beat the price of any retail store, which for board game and war game retail outlets is pretty hard. Because, hell, I work at Eureka! Puzzles. I get an employee discount, and it's still cheaper for me to go on Funagain Games and buy a game on that end. With shipping, it's still cheaper than what I pay at Eureka! Puzzles for a board game. As a result, a lot of retail outlets are starting to realize that if they're going to survive, they can't do it by trying to beat prices with online stores. GLENN GIVEN: I turned it down.
MACKENZIE CAMERON: As a result, they're realizing that their strength is that they have a physical presence. So their idea is that if they're going to charge more for board games than you can get online, they need to add value to that. GLENN GIVEN: They'll do stuff like the Knight Moves Cafe, where they're taking that approach. So the interesting thing about this-- well, one of the interesting things about this is comic books, right? So comic books in the '90s were huge, and then they went through this bust, and the people who owned comic book stores started to see the margins on their products get shorter and shorter and shorter, and fewer and fewer people buying comics. So it's a really dire situation. And in the past 10 years, it's become really easy to pirate comics online, but more so than that, there are apps for your phone or your iPad like comiXology or the Marvel app that actually do a really, really good job of presenting comics to you. My wife wouldn't read comics at all before, just because sometimes people who don't really know how to do layout end up doing layout for comics. And so literally, it can be confusing to read, just because they don't have artistic mastery of how to make someone's eye flow across the page. One of the things that the comiXology app does is that you can just zoom in panel to panel to panel, and it's a surprising shift in the way you can read it. Anyway, the important thing is that comic stores were getting a lot less money, and so what they did is they took all their floor space and started to dive heavier into the parts of their business that were generating the money, which was board games and hobby games. And the ones that survived-- because a lot of them have gone out of business-- the ones that survived tend to be really, really good. And the reason that they're really, really good is because they realized that they're not in the business of selling you comics.
They're in the business of creating a community. I drive an hour to my comic store because I'm an idiot. I could just be buying them online. I drive an hour to my comic store because they have good operators creating a good community for people to come together and talk about different projects. They're always running events. It's barely a store. It's like a little mini convention that's always slightly happening. And so a large part of that is realizing we can get people in this store for a longer period of time if we're all playing games together, if we're running card game tournaments and board game tournaments. MACKENZIE CAMERON: The interesting part is, how do we as designers take that? What's their role to us? And I mean, before, you could get your game picked up by distributors, or go to a publisher, and the publisher gets picked up by a distributor. And all of a sudden, you're not selling to a direct audience online. You're selling to hundreds of thousands of little mom and pop stores across the nation-- the numbers are huge. But now-- GLENN GIVEN: You can still wind up making a lot less. MACKENZIE CAMERON: True. Because your margins are going to be-- GLENN GIVEN: Because your margins are worse. MACKENZIE CAMERON: But part of the thing you can do now is that board game stores can really help with the promotion of your game. I know Eureka! Puzzles and Knight Moves are huge on this. If you've got a game that you want to show people, and you come and teach it to a bunch of people at their store, they're happy to do that, because on the one hand, you're adding value to their store, which they love and need. So people will be like, oh, your new game-- they had one guy come in, and that was really cool, I learned how he made this game. But then you're getting value out of that, and now people know about your game. And then when a store buys games from you, you still get maybe not necessarily the margins you need, but-- GLENN GIVEN: They're still a lot better.
So normally in traditional publishing, if I was going to put out a board game 20 years ago, 15 years ago, I would have to find a distributor like Alliance or Diamond or something, and then I would sell my game to them for 20% of the MSRP, which is the manufacturer's suggested retail price. So if I had a game that was $25, I would sell it to the distributor for $5, and then he would turn around and sell it to a store for somewhere in between that $5 and half the MSRP, and that's his cut. And in between, there's all these other people who are taking their cut and taking their cut and taking their cut. And the reason I've made jokes about the music industry model from the '90s is because it's the same fucking model. It's the same thing. There are people who are creating stuff, but because there is no really awesome way for them to get it directly to the people who care about it, they have to go through all these weird side industries or middleman industries. Only a few people who were tapped by very, very powerful companies got to be famous in the '90s. And then the internet came and destroyed all music, even though music is making more than it ever has, and all these other things. But what was rediscovered in that shift, and this is something that's happening in board games right now, is that people want that direct connection between creator and audience. You could spend $20 on a CD-- because they used to be $20-- or $20 on a ticket to go see a show by a band that you enjoy. Even though you don't get to keep that show forever, come on, it's so much more enjoyable. Because you could get that CD in a million different ways from a million different locations. It's really easy, but you're getting an experience with the other thing. And you're getting to know that you're directly supporting the creator, which is something that a lot of audience members are really invested in. And as someone who is a creator, I'm very invested in that.
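The middleman math Glenn describes is worth making concrete. A minimal sketch using his figures ($25 MSRP, sold to a distributor at 20% of MSRP); the $3.00 unit cost is a hypothetical number for illustration, not from the talk:

```python
# Traditional distribution per Glenn's example: the designer sells to a
# distributor at 20% of MSRP; the distributor resells to stores at up to
# half of MSRP. The UNIT_COST figure is a made-up assumption.
MSRP = 25.00
UNIT_COST = 3.00  # hypothetical manufacturing cost per copy

to_distributor = MSRP * 0.20   # what the designer actually receives
to_store_max = MSRP * 0.50     # the distributor's ceiling price to stores

profit_distributed = to_distributor - UNIT_COST
profit_direct = MSRP - UNIT_COST  # selling a copy yourself at full MSRP

print(f"via distributor: ${profit_distributed:.2f}/copy")
print(f"selling direct:  ${profit_direct:.2f}/copy")
```

The point of the sketch: at these numbers you need roughly eleven distributed sales to match one direct sale, which is why distribution only makes sense at large run sizes.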
But also it's financially beneficial. So for me, for instance, I can make a short-run print product for $3 and sell it for $20, and then bake the mailing price into that, and get some-- I've actually made a reasonably good profit on it. But if I make it for $3 and want to sell it for $20 through distribution, I end up having to sell it to a distributor for like $4 or $5. And then in order for it to be worth my time, I really need to make a whole lot of them. And the distributors probably only want it if I can make a whole lot of them. So now I've got to find a way to make 1,000 of this thing, which, in board games, is a ridiculous number of games to have. Unbelievably large number. But unfortunately, you can't get it affordable unless it's above that number. MACKENZIE CAMERON: Right. And that's the front end of financing too. But if you think about self-publishing and stuff: OK, to make it, I'll need 1,000 copies. And you'll be able to get a decent budget and figure out how to use it. All right. That's fine. But then when you do things like, OK, I'm just going to keep these in my house until I sell them all-- you don't realize how much space 1,000 board games takes up until you see it. GLENN GIVEN: Oh, actually, I have a video, because I have 2,500 in my garage right now. And it takes up exactly a car's worth. MACKENZIE CAMERON: You have $2,500? GLENN GIVEN: 2,500 copies-- 25,000. [LAUGHTER] MACKENZIE CAMERON: It's a card game? GLENN GIVEN: It's a card game. It is this card game. And I've got 2,500 of them in my garage and then some more of them down at the port [INAUDIBLE]. MACKENZIE CAMERON: We should bring up-- I don't think you ever said it. There is some benefit to going through traditional publishing. GLENN GIVEN: It's right for some people. It's not right for-- MACKENZIE CAMERON: And the reason I would suggest that is, if you get a really good game idea and you pitch it to a publisher and they really like it, there's so much more to that than just being like, take this game.
You love your game. Well, take it. GLENN GIVEN: And here's money. Forever. MACKENZIE CAMERON: If you create an idea and you sell it to a publisher, you're done. GLENN GIVEN: And it's not your idea anymore. MACKENZIE CAMERON: And it's not your idea anymore. But all of a sudden, with self-publishing, I mean, the amount of effort to make a game fly is huge. It's massive, especially if you're doing it on your own. But if you sell it to a publisher, you're not going to make as much money, but there comes a point where you're not doing anything and you're still getting paid for it. GLENN GIVEN: Yeah. 100% of a small number or 5% of a big number. MACKENZIE CAMERON: Yeah. GLENN GIVEN: And it's a question of, what are you in it for? And what does your time look like? For me, I make games, and I chose to leave a life that was not making me happy but was making me lots of money for a life that is making me significantly less money but is making me really happy, because I get to create things and they're fun. MACKENZIE CAMERON: You seem like a really happy person. GLENN GIVEN: It's the medication. Also, I don't work well in an office. So it fulfills that artistic or productive drive. And that's not something to be ignored. [INAUDIBLE] MACKENZIE CAMERON: The major point is that if you want to make your career making games, doing it yourself is really big. If you want to put a few games out there and do a lot of other things with your life, then, like, once a month send out all of your games to every publisher you've got an email for. You don't have to make it your life. GLENN GIVEN: Yeah. It's like if you were going to start a band-- is the band your life, or is it what you do for fun with some of your friends? Neither is an illegitimate way of doing it. They're just right for different people. PROFESSOR: We have no more than five minutes. So one question. MACKENZIE CAMERON: Really? We have time for two hours? GLENN GIVEN: I can do that. AUDIENCE: [INAUDIBLE].
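Glenn's "100% of a small number or 5% of a big number" trade-off is easy to sanity-check with arithmetic. All of the run sizes, prices, and the royalty rate below are invented for illustration; only the 100%/5% framing comes from the talk:

```python
# Hypothetical comparison: self-publishing profit vs. a publisher royalty.
def self_publish(copies, profit_per_copy):
    """100% of a small number: you keep every dollar of a small run."""
    return copies * profit_per_copy

def licensed(copies, msrp, royalty=0.05):
    """5% of a big number: a thin royalty on a publisher-sized run."""
    return copies * msrp * royalty

print(self_publish(1_000, 17.00))   # e.g. a hand-fulfilled 1,000-copy run
print(licensed(20_000, 25.00))      # e.g. a publisher's 20,000-copy run
```

Which side wins depends entirely on the run sizes you assume, which is exactly the "what are you in it for" question the speakers land on.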
Self-publishing is an undertaking that [INAUDIBLE]-- a learning curve, so to speak. And you guys have your fingers on the pulse of the community, and you know what up-and-coming designers have, and the skill sets. Have you considered publishing other people's games? GLENN GIVEN: I have considered it, and I've been approached to do it, but I've said no to it. Because I worked in publishing for a long time, and I am ideologically opposed to that model. That being said, one of the things that I do-- I do this on a consulting basis, and I also do this just through our website and the fact that all our games are Creative Commons licensed and the whole process-- Frank, can you turn that down? --is teach people how to do it. Because it is an undertaking, but it's nowhere near as hard as you would think. The kind of effort you would put into studying for a class is the kind of effort you need to put into figuring out how to deal with Amazon fulfillment services, or what it takes to-- oh, god, figuring out shipping. If you ever start a Kickstarter for anything, figure out the damn shipping. That will kill you. I had a friend of mine who made this really cool game called Mobile Frame Zero: Rapid Attack, which is a robot fighting game where you make LEGO robots. MACKENZIE CAMERON: Oh, yeah. It's on [INAUDIBLE]. GLENN GIVEN: Yeah. So they're doing a new one this weekend, which is like rockets in space or something. Anyway, he raised $85,000 on Kickstarter to publish what was just a book and mail it to people. But because he had not-- and he's a smart person-- because he had not figured out the shipping costs, he ended up $25,000 in the hole. Yeah, because you didn't go to the United States Postal Service and go, what are your rates? MACKENZIE CAMERON: So $85,000, and it cost him roughly $100,000 just to make it. GLENN GIVEN: So that is what a publisher does for you. They say, don't even frigging worry about it.
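The Mobile Frame Zero anecdote is a budgeting failure you can guard against with a few lines of arithmetic before launch. The talk only gives the $85,000 raised and the roughly $25,000 shortfall; the production cost, backer count, and postage rate below are hypothetical fill-ins chosen to illustrate the shape of the mistake:

```python
# Pre-launch sanity check: does the campaign survive its own shipping?
# All inputs here are hypothetical illustrations, not the real budget.
def campaign_balance(raised, production, backers, postage_per_backer):
    """Positive result = money left over; negative = in the hole."""
    return raised - production - backers * postage_per_backer

# Forgetting to price postage ($0/backer) makes the campaign look fine...
print(campaign_balance(85_000, 60_000, 5_000, 0.00))
# ...but real per-backer postage sinks it, which is roughly what happened.
print(campaign_balance(85_000, 60_000, 5_000, 10.00))
```

The design point is Glenn's advice verbatim: get actual rates from the carrier and multiply by backers before you set the funding goal, not after.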
We'll take 90% of whatever this game makes, 95% of whatever this game makes, but you don't have to worry about any of that crap. And so for some people, that's really-- MACKENZIE CAMERON: What you're actually finding a lot is that there's tons of small publishers, and these are actually starting-- I mean, Game Salute is one. Mayday Games-- I don't like that one as much. Anyway, Mayday, they do-- GLENN GIVEN: Like my neighbor. MACKENZIE CAMERON: [INAUDIBLE], which you might have seen on TableTop. GLENN GIVEN: Yeah, that's Dave Chalker. MACKENZIE CAMERON: Yeah. GLENN GIVEN: Yeah, he's a really good guy. MACKENZIE CAMERON: But lots of small companies that-- it was a designer, he had one game, he's like, I'm going to do it myself and try and make this work. [INAUDIBLE] was another one. There's a lot of these little-- GLENN GIVEN: [INAUDIBLE] Games just started like that. Now they have whole conventions [INAUDIBLE]. MACKENZIE CAMERON: They start with an idea, they do it themselves, and hey, it works out. And then from there, they realize, hey, we should do a lot more of these, because that was successful, and if we're successful at it, we can make it work. And then the one designer builds the company. It becomes three or four people, and it becomes mildly successful, and they push out all the games that they feel they can. I've made all the games that I wanted to, setting out. GLENN GIVEN: Or they've used all their hours in the day and they just can't do it anymore. MACKENZIE CAMERON: Well, then once that happens, a lot of publishers will start to look for other designers and publish their games. Because when you're designing your games, you've got your A idea, your B idea, and your C idea. These are all great. And once you publish all of those, you start to realize maybe those other ideas that you have are good, but maybe you'd be more successful if you could pick out some other ideas that other people submitted to you.
Really, that's actually better than what I would do myself. GLENN GIVEN: Or maybe-- a lot of times what will happen is, as a designer, you do have a lot of ideas, and if you're a smart [INAUDIBLE], you realize that most of them are probably not good. But that's fine. That's absolutely fine. Nothing springs forth from someone's head like Athena, perfectly formed. That is not how things are made. All of your favorite musicians practiced forever to get really good. It's just the way it is. In game design, it's the same thing. You have to make a lot of real crap. There's a really good phrase about writing, which is that every writer has 2,000 bad pages in them, and the good writers are the ones who got the 2,000 out somewhere else. So you've got to keep that in mind. But I think that the ability to find publishers is not diminishing, but the ability to self-publish is rising. Especially-- although we have great self-publishing tools now and the internet is really good for sharing things, the biggest thing that's happened in the past year and going into this year is 3D printing. So while we've just hit the start of what self-publishing is, we're just beginning to see what desktop manufacturing is-- like people who are just running their own little factories in their houses, which is super cool. MACKENZIE CAMERON: I'm actually doing that for my game. GLENN GIVEN: Yeah. That's so cool. I really want to do that so much. MACKENZIE CAMERON: I think we have time for a question. AUDIENCE: [INAUDIBLE]? GLENN GIVEN: It depends on the scale. AUDIENCE: [INAUDIBLE]? GLENN GIVEN: Yes. This was printed in China. Like I said, it just got off the boat. So I worked with a printer on this, because I've been doing printing and publishing for so long, I know the jargon and know the lingo. Again, another thing that a publisher can be really good for is that you don't need to study to figure out all of the things. They have made all those mistakes in the past for you.
MACKENZIE CAMERON: You don't need to screw up your bleed lines, your crop marks, and all of the-- they're all offset. GLENN GIVEN: If you've got the expertise, or you can organize the expertise amongst the people you know, you can get past a lot of it. But think about it: everyone who's printing stuff or making things, they want you to use their services. Because they are a service industry, they're going to make it as easy as possible. And if they don't make it easy, there's someone like four blocks down that is probably going to do it, because yay, capitalism. There is, I think, god, one [INAUDIBLE]. AUDIENCE: [INAUDIBLE] like buy a printer? GLENN GIVEN: No, it is. It's actually a huge expense to buy a printer. The problem with printers is that they're super expensive to have and not use. AUDIENCE: It's better to have a team run it. GLENN GIVEN: Oh, yeah. You want that machine running losing money, because you'll lose less money than if it wasn't running at all. MACKENZIE CAMERON: There are a decent number of American manufacturers, certainly for prototype-level materials. GLENN GIVEN: And in Massachusetts, like [INAUDIBLE] of Massachusetts. MACKENZIE CAMERON: Specifically Massachusetts, there's a lot of stuff. I would say, for what it's worth, nothing will beat China in terms of cost. GLENN GIVEN: It depends on the exact scope of your project and the number of units that you're doing. So if you're doing a few units of a very complicated thing, you could actually probably get good prices in America. If you're doing a lot of units of something relatively simple, you're right. You can't beat it. MACKENZIE CAMERON: Right. And then oftentimes, some people will do it piecemeal, so they'll get some printers for some things, like the boxes, the rule books. And you can pull that together, and that can be a nightmare. GLENN GIVEN: It can be. But there are specifically services that you can hire.
They're called pick and pack warehouses, where, let's say I had a board game and it had components coming from the four corners of the earth. I could get them all shipped to one place and then pay a person a quarter per unit to pack it all together and whatever. And if I filled out my spreadsheet correctly, I will realize whether that is a financially viable thing or not. So one of the things that I like about self-manufacturing is that I can figure that stuff out as I'm doing it. So for instance, on early prototypes of Slash, I was printing all the cards and cutting all the cards and killing myself, because it's a lot of cards to cut. And so it feeds back into the design. Well, if I change the size of the card, I can use this machine to cut it, and it's going to save me a half hour on every single unit. MACKENZIE CAMERON: And that's a whole other thing-- design around your-- GLENN GIVEN: Your technical capabilities. MACKENZIE CAMERON: Which is why you'll actually see the rise of super small games. GLENN GIVEN: Micro games. Because they take a lot less to make. MACKENZIE CAMERON: Yeah. The biggest game you can create using 10 wooden chips and miniature dice and fit into a package about this big. GLENN GIVEN: So there's that threshold. So one of the things I have with this is that when you do printing, you usually have, let's say for cards, one big piece of paper with a bunch of different cards on it, and then you cut it all out. Now, if I'm making 24 copies of the game, it just so happens that I can fit 24 cards on one sheet of paper, which means instead of printing card A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, whatever on that one page and then cutting it out and then sorting it, I can print A 24 times and then do that 400 times in a row, stack them all together, chop it, chop it, chop it, chop it, chop it, and I don't have to sort anything anymore. I have cut hours off of that pick and pack time.
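Glenn's uniform-sheet trick is worth spelling out: making 24 copies of a roughly 400-card game, he prints each unique card as its own sheet of 24 identical copies, stacks all the sheets, and chops. Each of the 24 cut positions passes through every sheet once, so each chop drops out a complete, pre-collated deck. A sketch of that arithmetic; the per-card sorting time is a hypothetical figure:

```python
# Uniform sheets: sheet i holds 24 copies of card i. Stacking all 400
# sheets and cutting at each of the 24 card positions yields, per cut,
# one card from every sheet -- a finished 400-card deck, no sorting.
COPIES = 24          # copies of the game == identical cards per sheet
UNIQUE_CARDS = 400   # one uniform sheet per unique card

sheets_printed = UNIQUE_CARDS        # same paper as mixed sheets would use
decks_per_stack = COPIES             # one complete deck falls out per cut
cards_never_sorted = COPIES * UNIQUE_CARDS

SECONDS_PER_SORT = 0.5  # hypothetical hand-sorting time per card
print(f"sheets printed: {sheets_printed}, decks produced: {decks_per_stack}")
print(f"sorting skipped: {cards_never_sorted} cards, "
      f"~{cards_never_sorted * SECONDS_PER_SORT / 3600:.1f} hours saved")
```

The paper cost is unchanged; the savings come entirely from eliminating the collation step after cutting, which is why the layout decision is really a fulfillment decision.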
And now multiply by the number that I just got out of it. Instead of making one game every half hour, I'm making 12 games every 15 minutes or something. MACKENZIE CAMERON: I just want to go back again to China versus the States. I'm finding a lot of Kickstarters realize that, with the margins they need, for small amounts they have to go to China if they're going to break even. But if they exceed their goals by enough that they can do larger quantities produced in the States, then they can keep the same margins. So a lot of people, if they can get a game big enough, will try and have a stretch goal to bring production to the United States if they can sell enough of them. GLENN GIVEN: This is a challenge that is unique to analog games, obviously. You don't need to worry about where you're getting your cards cut-- I guess unless you're making some kind of Pokemon AR game. That would be super cool. Do they have that? They've got to have that, right? Do they actually interact with each other? They should. Because last year they did a Kickstarter for Golem Arcana, which is like an app plus a miniatures war game, where your dude has a QR code on it, and then it uses the app to do all the combat. AUDIENCE: There's a [INAUDIBLE] tradition of Japanese arcade games that read cards, probably. GLENN GIVEN: Oh, god. I remember that. MACKENZIE CAMERON: The Mario Kart arcade games-- I've got one. That's great. PROFESSOR: [INAUDIBLE] AUDIENCE: [INAUDIBLE]? PROFESSOR: Yeah. [INAUDIBLE] We're going to play until-- thanks for coming. One second. Thanks again, everybody. So we're going to spend the last two hours of class-- well, first we're going to take a break and check in with y'all. Did you do that already with games? We'll check in with y'all to see whether you've formed into teams yet for that project. And then we're going to be playing-- do you have a couple of your games? GLENN GIVEN: I brought I think eight different games that I've made. PROFESSOR: Well, great.
So we're going to play some of those games, and we've got three games that are related to the assignment that we've talked about. MACKENZIE CAMERON: Also, if anyone is interested, I'm going to go ahead and leave some business cards up here. As I go through the process of making my game through Kickstarter, I'm basically going to be posting my experiences every step of the way, from design to manufacture. So if you want to watch someone struggle immensely through the process, and watch the mistakes that I've made, and laugh immensely-- it's going to be hilarious-- just grab a business card. If you're interested in playing croquet and also murdering each other, this is definitely the game for that. GLENN GIVEN: And so just to show you guys, these are the three phases of prototypes. Well, these are the two phases of prototypes that I went through. MACKENZIE CAMERON: They each have VHS cassettes. GLENN GIVEN: Yeah. And so here's the trick. The reason that it's in a VHS cassette is because this will fit into the smallest flat rate priority mailer. MACKENZIE CAMERON: Oh, really? GLENN GIVEN: I can get any game that fits in this size anywhere in America for $5 in two days. And you can't beat that. And so if I wanted to get a lot of these games out and not get a second mortgage on my house, because I'm not an idiot, I designed for that. So I actually do that with all of my monthly games. MACKENZIE CAMERON: And the cases are probably-- GLENN GIVEN: The cases are dirt cheap. They're like $0.08 each. MACKENZIE CAMERON: Yeah, because nobody uses them anymore. GLENN GIVEN: Yeah. No one needs it. So I found a place that is still selling them. So, supply chain stuff. AUDIENCE: So [INAUDIBLE] GLENN GIVEN: No, I did not. I don't know. Maybe I did. My artist is supposed to be getting it to me today. PROFESSOR: Thanks for coming, guys. What are you guys doing? GLENN GIVEN: Oh, thanks. That was great. AUDIENCE: [INAUDIBLE]
MIT_CMS608_Game_Design_Spring_2014
14_Adding_and_Cutting_Mechanics.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR 1: So, yeah. So the publisher can demand, you know, I want to see this feature. EA demands that this game works with Origin. Ubisoft demands that U-- is it Uplay? Is that what-- that their universal sign-up system gets added on. Things like that, right? That happens all the time. Other stuff that's maybe money-related or market-related? Sometimes expectations just-- AUDIENCE: This feature in the past didn't sell very well, so you should add it to every-- PROFESSOR 1: Yeah. That usually is similar to the publisher thing. Someone's trying to push that. But somebody could come internally from the development team, right? They look at competing products and say, hey, this thing seems to be doing well. You know, let's add hacks. Because hacks seem to be doing well for some games. So we can add hacks to our game. It's not going to necessarily change the way how the game functions, but it's still a new feature. It's still a new-- you've still got to come up with mechanics to unlock new hacks, and equip new hacks, and make sure they animate correctly, and things like that. Even if necessarily the gameplay decision making you make in the game doesn't change. Of course, hacks actually do change the way some games work. The changes in marketplace in general. And sometimes they're making a sequel to a game, or sometimes you're adding number two, or number three, at the end of the title. And your players are going to expect something new. And so there's some pressure for you to sort of add new mechanics instead of simply, say, iterating on the old ones. Why would you cut mechanics? Not necessarily cutting features, but cutting mechanics-- game mechanics. 
There's one good reason I've been drilling into you all semester long. Yeah. AUDIENCE: Simplify gameplay and decrease the learning curve? Or decrease whatever-- PROFESSOR 1: Decrease learning curve is a little bit separate from simplify gameplay. Both good points. Decreasing the learning curve is like-- testers just don't get how this game functions. We've got too much stuff going on simultaneously. You could try to tune those mechanics to make it a little bit more legible, or you could just try taking the mechanics completely out so that people can actually learn it. Simplifying the game-- could be that, but it also could be just trying to make what's core about the game more obvious. You know, this game is really about generations and generations of heroes, you know? That is a game, right? Hero Generations? Yeah. AUDIENCE: Yeah. PROFESSOR 1: Yeah. And you've got, say, some sort of very elaborate weapons crafting mechanic or something in there. Well, the weapons crafting mechanic may not be really what your game's about. So you might want to take that out, because people sort of forget about what the core experience of the game is supposed to be. So focusing back on the core of the game. What else? Say your game-- oh, yeah. AUDIENCE: New technologies come out? PROFESSOR 1: Oh. New technologies come out-- like what, physics? You know-- maybe? AUDIENCE: Yeah. Or it has something to do with things like-- well, they mentioned-- well, I don't know. 3D, for example. 3D comes out, it changed-- PROFESSOR 1: Well, that's true, right? I'm trying to think of a game where that's a really obvious example. Can you think of a game mechanic that substantially changed when it went from 2D to 3D? I'm thinking Mario, but-- AUDIENCE: Mario-- AUDIENCE: Movement, because now, instead of moving left, right, up, down-- PROFESSOR 1: Just straight-up movement. Well, the way how the camera responds to movement as well-- that's a big one, right?
In a 2D game, say a side-scrolling game, like Metroid or something like that, you don't have this situation where the place where you're trying to go to is obscured, unless it's actually physically blocked by something that you have to destroy or open up or something. But in a 3D space, that happens all the time. You know, you've just got bits of background scenery getting in the way of the camera. Mario just changed the way jumps work entirely, when it went from the 2D Marios to Super Mario 64. And every iteration on top of that has added new ways to jump-- double jumps, triple jumps, jumping off the wall, and stuff like that-- now, the latest Mario games expect that you know all of that. Which is why I can't play Mario anymore, because I didn't go through that process. Although the games are not that hard, at least the first couple of levels are not that hard. Let's see. What else? There's one that I've been always mentioning, this entire semester. It-- hmm? AUDIENCE: Maybe just to try something. PROFESSOR 1: Just to try something? AUDIENCE: Yeah. PROFESSOR 1: Oh, yeah. Yeah, just to see how it's going to change how the gameplay works, right? You know, let's take something out and see whether that was actually essential to the experience or maybe getting in the way of the experience. You don't know. You know, and the emergent results of taking out a mechanic might actually end up as a more interesting experience. One thing that the book does mention is to take stuff out that might actually change the ESRB rating of the game, or the PEGI rating if you're selling something in the UK or in Europe. So for instance, if you're building on top of a game engine like Unreal or something like that, which has body location damage-- and then you shoot a head, the game knows that you shot the head. But maybe you don't want your game to be that specific. 
Maybe you're designing a game for kids or something like that, and it's like, well, rewarding kids for head-shotting each other is maybe not the feel that we want to go for. It might actually push us out of the E for everyone and into the-- what's the next rating, T? AUDIENCE: And E-- PROFESSOR 1: Yeah. There's like an E-10 rating. At least there used to be. OK. And sometimes, you've just got to cut mechanics because you don't have time to do them. They're great ideas. They would actually make your game better, if you had time to be able to polish them and make them understandable and smooth and everything. You just ran out of time, and you just want to be able to ship. So that's a perfectly legitimate consideration. So that's this process of, like, generating ideas. Say, this is like an x or something, the number of ideas that you've got in your game. And mechanics in terms of either rules, and in a computer game, there'll be code. In these games, that's just the rules, it's pieces, it's boards, it's everything that you actually have to make your game work. And you kind of go through this brainstorming phase where you get a ton of ideas, right? You've got some good ones up here. You've got some crappy ones down there. Actually, it's probably more like that. A whole bunch of crappy ideas. And you've got a couple of good ideas. But yeah, there's a big explosion of ideas, where ideas turn into experiments. Some of them are thought experiments, that you're thinking-- just discussing them among your team-- what if we did this, what if we did that, how would you solve that problem. They may be real, testable experiments where it's something that you can play, something that you whip up within an hour or half an hour, and you actually put it in front-- we've been doing that all semester. Some of the ideas may just come up off different ideas interacting-- and just, what if we had dogs in our game? What if we had solar power in the game? 
What if we had solar-powered dogs in our game? And things like that. You know, and then ideas just happen. However, it's important-- so far I've sort of been describing the brainstorming process that you've all been through already. But one thing that we haven't necessarily asked you to do nearly as much is to come up with actual discrete experiments. You know, it's like, here's an idea. Actually come up with the criteria of success that you're looking for. How do we know whether this particular idea, this particular experiment version of the idea, is succeeding or not? And you're going to need that. Because you're going to need to cull. You're going to need to reduce this back down, try to keep this, or maybe even build this up a little bit. Actually, it's probably more like, you're going to cull a little bit. Then you want to bring this up again. And each time this cycle goes around, you'll have an increase in ideas overall-- you'll still pick up some bad ideas, but fewer of them-- and your total number of good ideas just keeps going up. And by good, I'm using a very vague term. It could mean polished. It could mean actually good for gameplay. It could mean something that's been proven in a marketplace, something desirable that you want to keep in your game. PROFESSOR 2: Things that could be-- like, it should be good for the core experience. But one of the problems could be, they're good, working mechanics, but they actually don't help the core experience, or they detract from the core experience. PROFESSOR 1: Mm-hmm. Yep. Let's see. There are a couple of dangers, even in this early brainstorming process. So I'm just looking at this whole part here. One is that you pretty much already think you know what game you're going to make. You've got a single solution. Say you're taking the computer game version of this class and you decide that you're going to make an RPG. And your model is Final Fantasy 6.
OK, so everything is going to basically follow the Final Fantasy 6 template. So you're basically pursuing one single solution. And the problem with that is your implementation ability-- your ability to actually make all of those things fit that template. You are not Square. Right? You don't have all of the resources that Square had, even back when Final Fantasy 6 came out. How old is Final Fantasy 6? 1994? '92? AUDIENCE: I don't know. PROFESSOR 1: Something like that. I mean, even then, they were a large team with many years to work on-- at least a year to work on that game. I don't know how long they actually worked on that game. There's a very, very good chance that your ability to actually implement something, even when you already know how it works, may not be able to meet that. So you will have a failed experiment, and then you're kind of stuck. You don't know what to do. So you're going to have to do lots and lots of experiments. But experiments are expensive. Everything takes time. Everything is going to need culling at some point in time. But you need a certain amount of time that isn't going to end up going into the game-- this chunk. Actually, it's a good chunk. Which is why we've been trying to teach you how to prototype really, really quickly. It's a little bit of [INAUDIBLE]. Because we want you to be able to get on to experiments, and come to a conclusion about whether that idea was good or not, as fast as possible, so that you reduce the cost of doing a single experiment. And yes, it's true that you're not going to be able to conduct an experiment on every single idea that you've got. So you have to pick and choose a select number of ideas to take to the experimental stage. Now, there are a couple of things that you can look out for to say, maybe we don't want to conduct an experiment on something. One is a brittle idea.
And a brittle idea is one that you have to twist really, really hard to make it fit with the ideas that you've already got. I'm assuming that your team has already got a concept, and you're building around that concept. Somebody comes up with an idea, and it takes a huge amount of twisting and turning to even make it fit close, even though you think that, on its own, this idea is a really good idea. You might actually want to see how many different ways this particular idea could be executed before you decide on an experiment to conduct on that idea. Two ways that you can do that. One is that you just specify: what is the goal of this possibly brittle idea? It takes a lot of twisting-- why do we need this idea in the first place? Is it because the challenge in the game is too low? Is it because players have too much information, or not enough information, about what they're supposed to do? Is it because you want to introduce a little element of chance, or maybe give some sort of strategic decision making? Try to identify: what is the problem that you're trying to solve in the first place with this kind of unwieldy idea? All right? And once you've been able to identify that, then you come up with, all right, what are all the different ways that we could address that problem? As opposed to, here's this one solution that's kind of unwieldy, that might very well solve that problem, but maybe there are alternatives. Has anyone encountered this in your own prototyping? Maybe in this assignment, maybe in a previous assignment. Like, somebody had an idea that was just kind of hard to use, but what the idea was trying to solve was actually a good problem to solve? No? Not yet? I thought I saw some nods. If you've encountered it, I'd love to get an example from one of you. I can throw out examples. But I'd love to be able to hear examples from one of you. No?
AUDIENCE: I guess in our first game, we realized a couple of problems with the game. So we came up with various ways of trying to resolve them, that didn't really work for the game itself, but solved the smaller problem. PROFESSOR 1: Can you be more specific about what the problem was? AUDIENCE: Yeah, I'm trying to think back to specific ones. One of them was that, with the way our game was set up on the tic-tac-toe board, it was really hard to see what sets were available on the board. PROFESSOR 1: Right. This was the set assembly game. Sure. AUDIENCE: Yeah. And so one of the things we tried to do was removing the center tile altogether, so that you could only play around the edges. Which did solve the problem-- now you could really see what everything was, so it was a lot easier in that sense. Except it was just a lot less interesting of a game, because there was a lot less strategy to it. PROFESSOR 1: So you had a version of the game where all the squares were in play, including the center tile, and it just made it confusing for your players. And then you have a version of the game where there's just, like-- this one possible solution to this problem is just take the center tile out. But that made it trivial, or-- yeah. AUDIENCE: So we tried to find a middle ground. And that's how we ended up with the idea where we had one single card in the middle. Because you could still use it, but you didn't have to worry about the lines of play going through the middle, and how incredibly complex it is to try and reason about that. PROFESSOR 1: OK. So it sounds like, instead of a brittle idea, you actually had a good, robust idea. Which is: that center square is really problematic, but it has possibilities-- it has really interesting gameplay possibilities. But for learning the game and starting up the game, it's problematic. So you found a way to say, all right, how do we deal with the center square, instead of the solution of just deleting the center square.
You know, let's find a different way to work it in the game. And your final game actually does that. Something else that you can do is, if you've got an idea for your game that's kind of unwieldy, and you're not quite sure what to do about it-- discuss it among your team, but don't experiment on it. Don't work it into your rules. Don't try to implement it. Just get into everybody's heads. Because that idea may come bubbling up later. And little bits of it might be applicable and slightly easier to implement as you make progress in your game. So in other words, put it on the back burner. But make sure you have actually had a chance to discuss it, so that everyone's kind of thinking about it, even if it's not something that you've decided that you actually want to do right away. It might be a lot faster to just have a quick discussion in that case than to conduct the experiment. Yep. So those are a couple of strategies on how do you sort of focus your experimenting time. And this is probably not going to be very useful for assignment two, but it is going to be useful for assignment three. Because in assignment two, hopefully you're well past the creating and adding new features part and now you're just trying to get this out of the door for Wednesday. So let's talk about this stage now. Well, yeah. In fact, I'm going to focus on this one instead. The sort of, like, cutting phase. Where you're both cutting good stuff and bad stuff. But you're cutting stuff in general, because you have to. Maybe because you're running out of time. Maybe because you are trying to get rid of the bad stuff. So, things that you need to do-- and this is something that I'd like you to do today, when you meet up in your teams-- is to determine what is the criteria for which you're going to cut a feature. For instance, come up with a quick description about what your game is supposed to be all about. You may have already written this in your rules. 
It might be in the very first paragraph of your rules, saying that our game is about getting entangled with other people, you know, or pushing them over, for instance. Which is it? And being able to identify it. And then, when you look at all of your rules in your game, you can actually determine, OK, which chunks of our mechanics actually serve that goal, and which ones don't. You know, there are things that are there just to keep your game going. Those may be fine to keep, but those aren't directly contributing to how good your game is. Those are just basically there to keep the thing working. And maybe they can take a little bit of improvement, but they're kind of suspect. And there's things that'll directly contribute to your core gameplay that you're trying to achieve with your game. And those are probably things that you'll want to spend a lot more time making sure that the rules are absolutely clear, and your players understand what they mean, that if there's any other mechanics in your game, they're supporting that-- you know, that those meet your criteria for what should stay in your game. Because once you know what your criteria is, it becomes a lot easier to be able to just look at everything that you've got, deciding what to keep and invest more time in, and what to cut. Now, there's a couple of problems that people typically run into when it comes to cutting stuff out. A couple of game developers refer to this as culling. If you come from 3D graphics, culling is a very common way just to be able to just cut all the stuff out that you don't need. So, for instance, if you don't have an explicit culling criteria, then you don't actually know why a certain mechanic exists, and you don't actually know whether you should keep it or not. So that discussion should be something that happens today. In the end, what is your game about, and does your game actually meet that? You might have other culling criteria that aren't that. 
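One way to make that culling criterion concrete is a tiny checklist script. This is only a hedged sketch: the core-experience string, the mechanic names, and the true/false tags are all invented for illustration, not taken from any team's actual game.

```python
# Sketch of turning "what is our game about?" into an explicit culling
# checklist. All names and tags below are made-up examples.
CORE_EXPERIENCE = "getting entangled with other people"

mechanics = [
    {"name": "entangle move",   "serves_core": True,  "playtested": True},
    {"name": "weapon crafting", "serves_core": False, "playtested": True},
    {"name": "score track",     "serves_core": False, "playtested": True},
    {"name": "double jump",     "serves_core": True,  "playtested": False},
]

def cut_candidates(mechanics):
    """Anything that doesn't serve the core experience, or that has never
    been validated in a playtest, goes on the chopping block first."""
    return [m["name"] for m in mechanics
            if not m["serves_core"] or not m["playtested"]]

print(cut_candidates(mechanics))
# ['weapon crafting', 'score track', 'double jump']
```

The point is not the code itself but forcing the team to write the criterion down: once `serves_core` has to be answered yes or no for every mechanic, the cut discussion gets much shorter.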
Does this game run in five minutes? What was the limit for this one? 10 minutes, I believe? PROFESSOR 2: 20. PROFESSOR 1: 20 minutes? OK. So if the game needs to run in 20 minutes, because we as instructors gave you that limitation, then that's probably a perfectly reasonable culling criterion for you to use. What in our game is contributing towards this, and what's actually getting in our way of hitting that target? Let's see. Something else that comes into play, and that affects culling, that affects this phase, is when you conduct experiments here, but your experiments are not very tangible. They're kind of floaty and wavy. This usually happens in teams that like to talk out all of the ideas, but don't actually like to do a lot of hardcore prototyping and playtesting. I hope that that's not so much of a problem for anyone in this class by now. You've gotten in a lot of practice at that. But you may have a couple of rules and mechanics that have ended up in your game that you never actually prototyped, that you've never actually tested out with real people. Make sure that whatever actually ends up in the version that you hand in on Wednesday is something that you've actually tested-- something that you've actually gotten some real data about, whether it's working or not. This one happens with every student project. Students-- game developers in general-- always assume that adding more stuff is better. It takes a very, very, very seasoned game designer to realize that sometimes simplicity is good. And again, I hope this is not a terribly new revelation to the people in this class. But I see it in student projects all the time. I see student projects where they assume that more features and more complexity are going to be better. And in fact, I saw some of that in assignment one. And we're going to be talking a little bit about that. More complexity doesn't necessarily always make for a better game. It may make it a more interesting intellectual problem.
It doesn't necessarily always make it into a more entertaining game. Let's see. One thing about culling is that cutting features and cutting stuff out kind of takes practice. You've got to do it fairly regularly. For assignment three, which is going to be starting on Wednesday, right-- we actually cut that on Wednesday-- what I'm going to ask you to do is try cutting a feature every week. You know, just say, what's on the chopping block this week? Every Monday, or every Wednesday, or if you have a regular meeting time on Saturday or something like that-- just, like, what is the least useful thing that we've got in our game right now. And just try that. Because you can always add stuff in later. Remember, this is a cyclical thing. You're going to be adding, then cutting, adding and cutting, adding and cutting. I think you've got in a lot of practice on the adding part. But you've got to practice a little bit more on the cutting part now. And you've got time to be able to do this for assignment three. You've got a little bit more time on assignment three than you had on assignment two. So, start setting yourself a schedule, of all right, we're definitely going to cut something. I don't know what it is, but we're going to meet up as a team and we're going to decide that. And if it turns out that it was the wrong thing to cut, you can add it back later. But that allows you to focus your efforts on the things that you've deemed as more important for a short amount of time. It can be hard to be objective about this process. Something that we've done-- in mostly our video game development, but you can do this in paper prototypes-- is to hand people very, very simple survey forms that basically say, how fun did you find this? How fun did you find that? Say your game has different roles. You have a tinker, tailor, soldier, spy. These are your four roles in your game. And you just hand out the survey form-- how fun did you find being a tinker? 
How fun do you find being a tailor? Just one to five. And that isn't terribly much information, but it gives you just enough information to help you concentrate your efforts on what's the most problematic thing in our game right now. You can go into greater details, like do you think that the game was too long, too short, too challenging, too easy, too luck-based, required too much thought. You can put these things on these five-point scales to be able to give you a little bit of feedback. They don't necessarily tell you what the problem is. But it tells you where the problem might lie. And that will help focus your discussion and your efforts a little bit. Yeah. And finally, the most important thing is that you've got to keep doing this over and over and over again. If you just do this once, you kind of end up in this situation where you've got a bunch of good ideas but the bad ideas are still there. I'm going to expand this for you. What you eventually want to get to is a situation where the only stuff that's left in your game is the stuff that is worth keeping and that works well. And all the stuff that was bad, you've either completely cut it out or you've improved it to the point where it is actually a desirable feature. And you're only going to be able to get through that through multiple iterations. You're not going to get through that just by introducing something at the last minute and hoping that it works. Because you haven't had time to actually improve it and polish it yet. Now something that's less of a problem for this class, but may be a problem in your other game design experiments outside of class, is that you may not know when to call it a day. You can repeat this process forever. And you've heard about games that have been going on for four or five years. How long was Duke Nukem Forever in production for-- 15? At least 10. 
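The five-point survey idea above can be tallied with a few lines of code. This is only a sketch: the tinker/tailor/soldier/spy roles are the lecture's own example, but the ratings and the function name are invented.

```python
# Aggregate "how fun did you find X?" survey forms (1-5 scale) so the
# most problematic part of the game floats to the top. Ratings are
# invented example data.
from statistics import mean

def rank_by_fun(responses):
    """responses maps a role/feature name to a list of 1-5 ratings.
    Returns (average, name) pairs sorted lowest average first."""
    return sorted((mean(scores), name) for name, scores in responses.items())

survey = {
    "tinker":  [4, 5, 4, 3],
    "tailor":  [2, 1, 3, 2],
    "soldier": [4, 4, 5, 4],
    "spy":     [3, 3, 2, 4],
}

for avg, role in rank_by_fun(survey):
    print(f"{role}: {avg:.2f}")
# tailor comes out lowest, so that's where to focus playtest effort
```

As the lecture says, the averages don't tell you what the problem is, only where it might lie; the ranking is a way to focus the next design discussion, not a verdict.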
And that was actually one example of where new technology just ended up changing the foundation of which the game was built on, over and over and over again. It just kept resetting the game engine development and just took forever. One thing that that game could probably have used is just some sort of standard criteria of saying, once we hit this criteria, we're shipping this game. It could be, once we've spent x amount of money. That probably should've been the case for that particular game. But it could have been, once our informal internal surveys hit a rating of four out of five. You know, that might have been good enough. Some companies don't settle for that. There are companies that say, we're only going to release the absolute best games that we absolutely can. But, don't kid yourself. Internally, they do have criteria on when these games have to ship. Even Nintendo does. And Nintendo's notorious for shipping late games. Because they have this mantra that a game is late for a short while and a bad game is bad forever. So that's what they say to the public. Internally, they ship by Christmas. They get those games out to the market. Because they have a time box. That's the other way to do it. And so it's like, well, we have to be able to get into the manufacturing process to print those discs-- less of an issue nowadays. But Nintendo still operates in disc fashion because they grew up as cartridge manufacturers. So they actually had to print it. Now when it comes to publishing a paper game, that timeline can be even longer, right? Because you have to put ink on paper, you have to have it folded, you have to get all the plastic and stuff put in plastic bags, and put it into the box, and then put into pallets, and then shipped from China, usually, over to stores all over the United States so that they're in stores in time for Christmas. And that's, like, a huge timeline. So that became the time limit. 
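The two kinds of ship gate described above, a quality bar on your internal surveys or a hard time box like a manufacturing cutoff before Christmas, can be sketched as a single check. The 4-out-of-5 bar comes from the lecture's example; the dates and the function name are invented.

```python
# Sketch of a "when do we ship?" gate: ship once either the internal
# survey average hits the quality bar, or the time box runs out.
from datetime import date

def should_ship(survey_avg, today, deadline, quality_bar=4.0):
    """True as soon as either gate is hit; otherwise keep iterating,
    keeping in mind the diminishing returns of never shipping."""
    return survey_avg >= quality_bar or today >= deadline

print(should_ship(3.5, date(2015, 9, 1),  date(2015, 11, 1)))  # False: keep iterating
print(should_ship(4.2, date(2015, 9, 1),  date(2015, 11, 1)))  # True: quality bar hit
print(should_ship(3.5, date(2015, 11, 2), date(2015, 11, 1)))  # True: time box expired
```

Writing the gate down up front is the whole trick: once the criterion is explicit, "just one more iteration" stops being an open-ended argument.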
Sometimes you just set the time limit because you have to be able to meet ship dates. It's not very complicated. Now, when you're working on your own internal Kickstarter project or something, where you already know who's going to buy your game, because they paid for it upfront, before you even started manufacturing-- I think we'll hear more about that maybe in about two weeks, or about one week. April 9? PROFESSOR 2: 9th. PROFESSOR 1: Yeah. We'll have a couple of folks who actually work in the board game publishing industry talk a little bit about how they set goalposts and deadlines for themselves, what are the considerations for that. But in the end, it's just-- they had to set a goal, somehow. Either it's a quality goal or it's a time goal. And once you hit that, you have to ship. Because you can always do-- if you have set a quality goal, you can always do better than that. The more time that you spend, theoretically, if you are sticking to this, the higher quality your game is going to get. But then you don't ship. You just have diminishing returns. You'll never be able to move on to your next project. You're never going to get revenue from the project that you started. And your game may not be improved by that much more, just by spending extra time on it. OK. So again, we're going to talk a little bit about things that are relevant to this from what we've observed from assignment one. And then for your team time today, start talking about, what is your culling criteria. What are you trying to achieve-- what's the simplest thing that you're trying to achieve with this game, and the simplest way to describe it. And then look at all the things in your game based on that criteria. And whatever doesn't meet that might be something that you want to cut out, especially if it's already causing you problems. And maybe you'll just save some time-- free up a little bit more time to work on the stuff that's already working, and you can polish it up a little bit more. 
So any questions about adding and cutting features? OK. PROFESSOR 2: So, taking all this and relating it to assignment one-- and then going into assignment two-- let's also start with when you cut in assignment one. So, when it comes to advanced rules and basic rules: if you're going to do an advanced and a basic rule set, make sure your basic rule set matches the core experience you're trying to make with the game. That's really, really important for assignment two, and for the third assignment. Because that's what we said when we set out, right? We said, come up with something that's going to have a very particular experience that you want the players to have at the end. So if you're going to do advanced rules and basic rules, that criterion should be part of what your basic rules are. The advanced rules, as Philip mentioned yesterday, for some of them, could add more interest. They could be more intellectually stimulating. But really, the basic rules should support everything the game has. The advanced rules are just add-ons, actually. If you don't want to send us advanced rules, don't just cut them silently. Say, we're not going to use the advanced rules because we just could not get them to work in time. And that's absolutely, perfectly fine, so long as the basic rules still match the core experience you're trying to get across. PROFESSOR 1: One thing I do want to add to that is that, for the most part, your basic rules are pretty darn good. Much better than a lot of the advanced rules. Maybe because the wording in the basic rules had a little bit more care. Maybe you tested it. We found the basic games very playable and pretty enjoyable, which was probably just fine for what we were expecting from assignment one. So a lot of the stuff that you had in advanced rules-- I do realize some of those were things that came up in the testing process, and you thought, you know, this is the game that our game has evolved into.
But once you realize that there is a core, basic game that works, and then that this sounds like a slightly more complicated version of the game, that slightly more complicated stuff is all candidates for cutting. Because you've already identified what the important part of this [INAUDIBLE]. PROFESSOR 2: When it comes to time and your deadlines, of course, adopt a common lingo. Adopt a common language. Stick with it. Proofread your rules. I think I have it written down in the very bottom. Proofread your rules. So I'll say that a third time later on. And that means, like, naming conventions for the pieces, for what you've chosen to call things in your game. That could be naming conventions based on the rules, but also proofreading your cards, too. Making sure all your pieces are using the same language. When it comes to talking about what the player does, we did notice some should versus may versus must, kind of getting mixed up. So when we're playing the game, we're not quite sure, well, if I should do this, if I may do this, why-- if it's may here, is it also may over here, or is it should over here? Just wasn't quite sure. Wasn't quite clear. So I definitely recommend spending today working on your rules. Spend today having us play your game using your current rules, and then taking our feedback and then working on it for turning it in on Wednesday. And again, when it comes to cutting things-- when you are cutting, making sure that you are filling in the gaps. So if you cut something out, there might be a system, there might be a rule-- in this case, it generally was polish issues. So if something was removed from one of the rule sets, we saw a little bit of-- PROFESSOR 1: Remnants. PROFESSOR 2: --remnants of it, spread in the rules. PROFESSOR 1: So for instance, say you are one of those groups that had basic rules and advanced rules. 
And you took a chunk of the game that you moved into advanced rules, because you realized that the core, basic game was more playable without that. But your basic rules still refer to bits of the advanced rules. And that gets really confusing. Because the advanced rules are a completely different section of your rules that we haven't even gotten into yet. But we will. So do that cleanup work. PROFESSOR 2: For setup-- so if you're going to have a setup section before your rules-- it's actually highly recommended-- have a setup section that is split out, that is different from your rules section, so we know exactly how to set up the game. Include diagrams and photos. Where do players sit? What is the orientation of cards? If you have a card that has all this information on it, what is the common information across all cards? If you think about a Magic: The Gathering card, and all the different icons-- hopefully your cards aren't as complex-- but you'll have the same thing in the same place on all the cards, so you can then tell us what that thing is, and why it's useful, and how to use it as you're playing the game. And in particular, let your game teach us how to play, if possible. If you've got time. If not, it should still be in the rules. What I mean by let your game teach us is, if you're going to have a board, and the board has a lot of empty space on it, you can put some of your rules onto the board. You can put some of the setup instructions onto the board, if you've got time right now. Proofread your rules. And turn in as many copies of your rules as you have players in your game. So, the assignment did say it's a set number of two, three, or four players for your game. If it's a four-player game, give us four copies of your rules. That's all the basic stuff. Yeah. And then we'll be getting you feedback, specific feedback for each individual game, tomorrow. Yeah. Yeah. Tomorrow. I'm not sure what time tomorrow. But we'll get you that before you're actually turning it in.
That's it for me. PROFESSOR 1: I just want to clarify one thing about should, may, must. I'm not sure, because I'm trying to think of a more concrete example. So it's like, if in your rules, you say, the player should move his or her piece from one square to another, we're not quite sure whether you're describing this as a good strategy-- as something that the player should generally be doing because it's going to put the player ahead-- or something that the player must be doing because the rules require the player to do this thing. Should is a very, very difficult word to figure out for us. And if you're to begin-- AUDIENCE: Would you suggest not using the word should? PROFESSOR 1: It's possible. Sometimes context explains it for you. But it does muddy things more often than it helps. I find should makes perfect sense if you're going to give an example. If you're going to give an example, make it [INAUDIBLE]. Make it obvious. Or even, like-- PROFESSOR 2: Put it in a sidebar, or-- PROFESSOR 1: Right. Do indents, or a sidebar, or something, to make it clear that this is not actually part of the rules. This is an example of how this set of the rules plays out. And then you can use words like should. Right? You know, player one should do this. Because you're describing a strategy. You're not saying that the rules require this to happen. Must-- say must when it's must, if a player must do something. And when you say a player may, be very, very clear of what else the player may do. Say, for instance, it's one out of five different options. You say, the player may do one of these five options-- bullet point, bullet point, bullet point, bullet point. And the last bullet point is probably, like, do nothing. If a game allows something. PROFESSOR 2: Bullet points are really, really useful if there's any kind of option. Even if it's just, like, a player may do this or this, break up the or in its own line. Give it bold. Put a couple lines around it. 
Put the next thing that they can do-- the other thing they can do-- after it. Make it really, really clear it's one or the other, not both. PROFESSOR 1: Yeah. Yeah. Yeah, that goes the same for not just your printed rule sheet, but also cards, stuff on the board. Because sometimes there are games where you have options on the card, right? And you may do this or you may do that. It's not entirely obvious if, say, the card does both of those things, or you get to choose one of those things, and who gets to choose between those two things. So that's something to keep in mind. You only use numbered bullet points-- 1, 2, 3, 4-- if something's going to be sequential. It's good for things like, here are the steps of a round of game play-- first you do this, then you do this, then do-- do, do, do. Don't say the player may do one of the following-- 1, dot, 2, dot, 3, dot-- it's like, uh, now I'm not quite sure what's going on. OK? PROFESSOR 2: Yeah. Yeah, and actually, this brought up rounds, stages, phases, turns. PROFESSOR 1: That's the vocabulary thing again. Yeah. PROFESSOR 2: Choose one. Stick with it. Proofread it. Make sure you chose it. Make sure it works. Especially if you're going to make any changes today, and you're turning it in in two days, have somebody who is not you proofread it and make sure you're using the same language throughout. PROFESSOR 1: Rounds usually involve everybody around the table doing the same sort of thing, in sequence. Steps are really big. Phases-- it's usually describing large chunks of the game, you know, things like opening [INAUDIBLE], mid-game, end-game phases, and usually not used in rules. But I know that some games-- PROFESSOR 2: Connect those games. PROFESSOR 1: Yeah. Use those. AUDIENCE: Sometimes they use other terms. PROFESSOR 1: I know. This is why it's maddening. Because it gets used in both contexts. AUDIENCE: So what would you use to describe the individual pieces of a turn? PROFESSOR 1: Steps.
Especially if it's a sequence of things. Like, this is my turn, in a round, and here are the three steps I get to do. Step 1-- draw a card. Step 2-- choose a card from my hand. Step 3-- play it. Right? You know, things like that. That sort of works. Phases-- I've also heard it described in things like, round one is one kind of action, and round two is like a completely different kind of action, and round three is [INAUDIBLE] like phases. And it was like, oh my god. That's-- [INAUDIBLE] that, actually. Yeah. AUDIENCE: In [INAUDIBLE], there are turns in phases within rounds in phases. PROFESSOR 1: Yes. This is why the game is actually so much harder to learn than it is to actually play. To play is actually not that hard. But when they desc-- AUDIENCE: The phases don't matter, but the turns and everything else were actually really, really good. PROFESSOR 1: Yeah. So that's why phases complicate things. Sometimes it is, in fact, the right word, but it's just hard to be able to expect that every single player is going to interpret that word the same way. OK. PROFESSOR 2: So you can break up into your teams, continue working on your games. This is playtest time. So, in particular, playtest your rules. If you want us to play your rules and give you feedback before you turn it in, today's the day to do it. PROFESSOR 1: And attendance sheet-- PROFESSOR 2: Attendance sheet's right there. PROFESSOR 1: Attendance sheet's over there. [SIDE CONVERSATIONS]
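The advice above about ambiguous modal verbs ("should" vs. "must" vs. "may") and mixed round/turn/phase vocabulary could be mechanized as a tiny proofreading script. This is purely an illustrative sketch, not part of the course materials; the word lists and the sample rules text are assumptions.

```python
import re

# Words the lecture flags as ambiguous in rules text, and the
# round/turn/phase vocabulary that should be used consistently.
# Both lists are illustrative assumptions, not an official checklist.
AMBIGUOUS = {"should"}
TURN_WORDS = {"round", "turn", "phase", "step", "stage"}

def audit_rules(text):
    """Return (ambiguous words found, turn-structure vocabulary in use)."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(words & AMBIGUOUS), sorted(words & TURN_WORDS)

# A made-up draft rule to audit.
rules = ("Each round, a player should draw a card. "
         "During the trading phase of your turn, you may trade.")
ambiguous, vocab = audit_rules(rules)
print("Ambiguous words:", ambiguous)           # ['should']
print("Turn vocabulary in use:", vocab)        # ['phase', 'round', 'turn']
```

A human proofreader still has to decide whether each "should" means "must" or is strategy advice, and whether using three different turn-structure words is intentional; the script only surfaces the candidates.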
|
MIT_CMS608_Game_Design_Spring_2014
|
31_Prototyping.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, so before we go into the stuff that I'm going to cover for the readings today, I want to get a sense of who's already found a team. How many of you are already in a team? Half the class, OK. And the rest of you, what we're going to do is try to solve that problem right now and get you in a team. For the people who are in teams, can you say what mechanic you're working with, and how many people you have in your team? AUDIENCE: What if we have two mechanics, and we're not sure of that yet? PROFESSOR: That's fine. Just talk about both of them, and then you'll figure it out, probably today. AUDIENCE: OK. PROFESSOR: So what are they? AUDIENCE: We were thinking between either like pathfinding or some kind of resource management. PROFESSOR: OK, pathfinding-- AUDIENCE: I'm sorry, path-building. PROFESSOR: Path-building and resource management. AUDIENCE: And we have three people. PROFESSOR: OK, so three right now. OK. What else? AUDIENCE: Hidden information, and we're four people. PROFESSOR: You have four. Hidden information, so that's four. AUDIENCE: Stealing or like team sharing. PROFESSOR: I'm sorry, stealing, or? AUDIENCE: Or like teammate sharing. PROFESSOR: Teammate--? AUDIENCE: Sharing. Sharing your teammate. PROFESSOR: Oh, OK. So stealing or team sharing. I think we'll say team sharing. How many people? Four. Is that it? OK, and how many people are looking for teams? OK, one, two, three-- is your hand up? AUDIENCE: No. PROFESSOR: OK, one, two, three, four, five, six people. All right. So we could make two complete teams out of the remaining six. I think that's right, did I count that right?
So that's what I would suggest, is that all the people who don't have teams-- we try to make two teams out of that, rather than try to join this team. Because otherwise, we end up with a two-person team somewhere, and that's not good. All right, so of the people who aren't on a team, we went through brainstorming on Monday as a whole class-- I can bring up the list again, but was there something that you remember from Monday that you felt like you would like to work on? AUDIENCE: Either voting or bankruptcy. Or basically trying to bankrupt people. PROFESSOR: Bankruptcy. AUDIENCE: Your numbers-- there's 16 people. You went ahead and said there's four 4-teams and six unassigned. PROFESSOR: There are six unassigned, right? AUDIENCE: But there's-- you counted a total of four, four, three. Someone's being counted on two teams. [INAUDIBLE] When they come. They're not here right now. PROFESSOR: The remaining people who are not here-- if we have a bunch of three-person teams, it's a lot easier for extra people to jump on. And three-person teams are pretty easy to work with. Four-person teams are also workable, but it'll be a lot easier to schedule a meeting with three of you. So three-person teams are good. Two is pushing it because if someone gets sick, you're in trouble. Yeah? AUDIENCE: Back to mechanics, I was interested in trading, or maybe building, or expansion. PROFESSOR: Trading, building... Did you say expansion? AUDIENCE: Expansion. Territorial expansion. PROFESSOR: OK, so by expansion, we mean territorial expansion. OK. Yeah, we see these three things in one game a lot, right? But for this assignment, let's see where we can deal with one. AUDIENCE: I'm interested in building or area control. I would be interested in deception or what the unfairness means. PROFESSOR: What? AUDIENCE: Unfairness. PROFESSOR: OK.
All right, so here are the different kinds of game mechanics where, of the people who aren't assigned, at least one person is interested in them. So what I'm going to do is go through each one of them, and for the people who aren't assigned, put up your hand if you think that you might be willing to work on a team on that concept. OK? So, starting with voting, we have-- OK, so three; bankruptcy-- one; trading-- three; building-- four; expansion-- it's territorial expansion again-- four; path-building-- one; area control-- four; area control and expansion might actually end up beating each other, then. Deception-- one, two, three, four; and unfairness-- one. OK. So we've got a bunch of things-- I thought there was a lot of interest in area control. I think I'm going to leave that off because we will be revisiting the topic later in the semester. So you'll get a chance to look at these. So I'm going to take these two out for now. I'm gonna take out all the one-vote ones; that will make it less tricky. That leaves us with voting, trading, building, and deception. Of the people who aren't assigned, was there one that you're not interested in? Any one that you're not interested in working with, of these? AUDIENCE: I don't particularly like the [INAUDIBLE]. AUDIENCE: No, he's saying are you not interested in all four. PROFESSOR: In one, two, three, or four. If you're interested in one of these, I think we can make teams out of this. All right, so later on in the class, when we start the prototyping, there's going to be time to actually talk with each other. What I'm going to actually suggest is all the people who are not on teams switch with the front row, and all people who have teams, switch with the back row. Probably, one corner will also have to be a team that already exists. So yeah, actually, let's do that now. So if you're not on a team, switch with the front row so we can all have a discussion.
[SIDE CONVERSATION] PROFESSOR: You might as well sit with your teams if you already have one. Because we're going to talk about something new together today. [SIDE CONVERSATION] PROFESSOR: All right, for the people who aren't in teams, remember the goal is trying to make two teams out of this, of any combination. If you end up changing the game mechanic, that's fine. All I'm looking for is two teams, out of the six people who aren't assigned any. [SIDE CONVERSATION] PROFESSOR: Who hasn't signed in yet? Who hasn't signed in yet? AUDIENCE: I haven't. My name isn't on it. PROFESSOR: Write your name in. AUDIENCE: I was pre-registered. PROFESSOR: You're pre-registered? AUDIENCE: Yeah, and I was there before. My name was on the list the first time. Now my name's not on the list. PROFESSOR: Very weird, something weird. Write your name down, and I'll check it out during the break and see how to fix that problem. All right, so how many of you have seen this presentation from me before, in any of the 15 times I gave it in the past year? OK, all right, so about four people. This is a presentation that I give a lot. It's also something that ends up getting covered in one of our new classes this semester, CMS301. This is probably gonna be the last time I actually give this presentation in CMS608, because it's kind of a really basic skill; we're gonna be moving this into our intro classes in the future. But it's also the one skill that-- if you learn nothing else from the rest of the semester, even if you stop coming to class-- please learn this. Because this is the core skill that we're going to be asking you to keep working and keep practicing and keep improving on, all semester long. This is the thing that you're going to be doing all semester long. So first of all, that's kind of jumping the gun. Let me take a step back. What's a prototype? It's in the reading.
AUDIENCE: Like a basic thing you just toss together to illustrate a concept. PROFESSOR: To illustrate a concept, yep. AUDIENCE: Isn't it supposed to be one mechanic, and you iterate on that one? PROFESSOR: For a game, it could be something that just tests one mechanic or concept, and then you have to iterate on it, sure. AUDIENCE: Something you just put out there to get feedback. PROFESSOR: Yeah, to just sort of gauge how other people are going to respond to it. AUDIENCE: In general, it's just an unfinished version of the game. PROFESSOR: It is unfinished. It is not like your shipping product. At no point should it seem to be a shipping product. AUDIENCE: I'd say it's that first version of anything that was specifically built just to test that thing, rather than to have a product. PROFESSOR: Again, just to test an idea, to test whether something could work, and, sir, you said first version-- it could be an early generation of something. AUDIENCE: I was gonna say like a minimum usable product. PROFESSOR: Something that you can actually use, not like a sketch of a game. An actual game that you can actually play. Anything else? I thought I saw a hand back there. I think we are getting kind of a good sense of what a prototype is. You probably encountered prototyping in some other classes-- a lot of engineering classes involve prototyping. It is this unfinished thing. It is not meant to be an iteration of something that you're actually going to ship. Now, in this class, a lot of things that you're going to end up building are building towards a finished class assignment-- the thing that you hand in that meets all of the criteria, that has all the rules, and if you put it in front of someone who's never seen your game before, they should be able to figure it out. However, the very first assignment is pretty much: build a prototype. You're testing out one single mechanic.
And the question that you should be asking yourself is, what are all the different things I can do with this one mechanic? And then you can just deep dive into this one big open question. And you're gonna end up choosing that question for yourself, as an investigation. And prototyping is a tool to help you investigate something that is going to help you build your final game in the end. So think of assignment 1 as an exercise that's actually going to help you build assignment 2 and assignment 3. Even though we are asking for things like rules that we can read, for the most part this can be very sketchy, very unpolished. If we can play it, if we can use it-- I think someone said minimum usable product-- to be able to see the ideas that you're working with, that's good enough. Do spell check your work, that would be nice. But some of the reasons why you want a prototype-- where are my slide notes? Just one second-- I have no slide notes. OK. So some of the reasons for why you want a prototype-- we already talked about getting feedback-- being able to put it in front of people who may not necessarily have seen your game before, but may be part of your target audience-- to be able to get their opinion. Put it in front of instructors who may never have seen any other version of the game before, or maybe have seen the game, to be able to get their feedback. Or other designers or guests that we might be bringing in-- you want to be able to put something in front of everyone who knows something about games to be able to get their critique, right? But the earlier and cheaper part is the important part. You want to be trying to get feedback as quickly as possible in your game development process. The earlier you manage to get new information on whether your ideas are working out or even appealing, the cheaper it is to make changes.
So for instance, you've got a great idea for one of the game mechanics, and it turns out that everyone else on the team absolutely hates it. But you'd rather find out about that on the first day of meeting up with your team than, say, two weeks into the project. That would be very useful information, to sort of say, "OK, I can work with other ideas." Are you on a team? You are a team. Awesome, OK. So I don't have to worry about getting you on a team. The other thing you are trying to do with a prototype is to try a lot of different approaches to the same problem, to experiment with a lot of alternative solutions. One thing that often happens in design teams-- professional design teams, amateur design teams in school or outside-- is that people like to talk their ideas out. They love to make sketches, they love to theorize about-- well, it worked in this game, so it should work in our game-- I really like that, or I really hate that in this other game. This offends me in some sort of primal way-- a game designer could explain it to you verbally in about an hour or so. And that just wastes a lot of time, especially in a class like this, when you have these very tight deadlines to be able to get something working. You don't really want to be spending a whole lot of time talking things out. You want to get actual answers as quickly as possible. And one nice thing about prototypes, especially on the multi-person teams, is that you can make multiple prototypes. Just test a whole bunch of different ideas out. If solution A is better than solution B-- well, instead of arguing about whether, theoretically, solution A is better than solution B, why not just prototype both of them and just play them? You'll get an answer very, very quickly that you and all of your teammates can see. And it makes it [? special ?] and actually more fruitful because you're working with evidence, as opposed to working with just theoretical notes.
Another thing about prototypes is that, again, prototypes are not your shipping product, which means you should always be willing to throw them away. The less time you spend making a prototype, the easier it is to just discard it. Looks-wise, the uglier and sketchier it is, the easier it is to abandon it. And you need to be able to abandon prototypes. You need to be able to say, this just isn't working, and I'm fine with that-- the whole team is fine with that. You spent 30 minutes making this thing; we can afford to lose those 30 minutes. Which means you need to be making stuff really, really fast and really, really shoddily. You don't want to be spending a lot of time making a polished prototype. Here are goals that I am setting up for you when you're going into prototyping-- today we're actually going to start prototyping the games that you're going to eventually hand in for Assignment 1. I want you to find the fun. All of these game mechanics-- the mechanics that the teams have already chosen, and the mechanics that the teams will end up choosing-- all of them have fun and un-fun implementations. I can think of a whole bunch of ways to make building go at a really plodding pace, to make it so that you can never really get any progress, and things like that. And you can come up with really, really nice implementations that are just going to engage everyone around the table-- they're gonna have a good time. Fun doesn't necessarily mean everyone's happy. Take a game about deception, for instance-- you're playing around with that mechanic, and you feel like you're just being an asshole to other people. But that's what the game's actually about. You are trying to be deceptive to other people-- maybe not necessarily without them realizing it. But then, [INAUDIBLE] experience, that engages you and sort of puts you in the persona of what the designers were trying to achieve. Then, that's engaging. That's good.
If it's something that puts people off from ever playing the game again, then you might want to re-evaluate that. But there is some value in games that you're only gonna play once. I'm not going to be very dogmatic about that. What I want you to do throughout the prototyping process is figure out, of those game mechanics, what are the fun and engaging things that you can do, and what are some of the less fun implementations? If you don't find anything that's not working, then I don't think you're looking hard enough. You need to be looking really hard to-- What you should be finding is a whole bunch of things that don't work, and a few little gems that do. The other thing that I want you to do with the prototype is use the prototypes to communicate to the rest of your team about where your ideas are going. As you work in your team, if you want other people on your team to understand the ideas that you have in your head, try making a prototype to communicate that. This is the thing that I did last night; I bring it into the team meeting-- together, let's play this for like five minutes. And then you'll understand what I think is interesting about voting, for instance. And your team may have a completely different idea about what they heard, about where they wanted the game to go, but this is a very effective communication tool. When you're actually designing games, especially for assignment 2 and assignment 3, we are trying to hit some sort of desired, external aesthetic. The final assignment is going to be for a client's needs-- then, you are working on some sort of external spec. But then, you also need to communicate within your own team on how you're interpreting that spec. That requirement, that request, from an external party-- in this class, that will be your instructors. Now, if I say make a game that is going to get you to a certain aesthetic, which is assignment 2, then how do you think the team should even start proceeding in that direction?
Communicate that using your prototypes. It can be very, very difficult to get those ideas out otherwise. The third thing that I want you to do with your prototypes is to take them outside of your team. Today is going to be very easy because we have a room full of people who are hopefully eager to help each other out. And they're going to end up playing each other's prototypes by the end of class. That's only gonna last for one class because, as of the end of today's class, all of you are going to know too much about each other's projects to actually be good testers in the future. So I want you to take it to people outside of class-- your dormmates, your friends and family. Email them stuff if they're at home-- take it to the libraries or the student center or something. Offer people free tacos or something to play your game for five minutes, that sort of thing. It's very, very important to make sure that you're getting feedback from people who are not on your team. Of course, the instructors-- it's not just gonna be us. We're gonna grab people from the game lab to come in here and play your games and give you feedback on them. So occasionally, we'll just bring in new people who haven't seen your game before, but don't count on us doing that. You should be doing that as part of your homework. That is the process of prototyping. OK, before I go down that shopping list, any questions so far about what you're trying to achieve with prototyping? OK. Again, Assignment 1 is going to be a lot more about prototyping than making the full game. Investigating one single game mechanic is something that you're probably actually gonna end up doing for both Assignment 2 and Assignment 3. Because it's like, hey, this is how I think we can fulfill the requirements of the assignment, and then you're gonna investigate several different mechanics to get to building your final game. So think of Assignment 1 as the prototyping assignment.
Let's talk about what we've got for you today-- things that we recommend for prototyping include large sheets of paper. Here are a couple of preprinted maps-- there's a hex grid on one side. There's a regular horizontal and vertical grid-- is there a word for it? Cartesian?-- a square grid. These are, I think, two-centimeter squares, which is also the size of some of the wooden blocks that we've got. I think the hexagons are also two centimeters if you take them horizontally. So you can print these. These are all actually available in PDF on our class website-- the prototyping maps. There's a second version of these maps that has marks. It has just a bunch of lines in bold. It's exactly the same map, but it has a few things boldfaced to help figure out what a square in here actually looks like, if you want to use a counting track, or something like that. I like these better. I know Rick likes the other one better, so we put both up for you to download. Let's see, what else do you need? Dice. There's a big box of dice: 6-sided dice, 12-sided dice. Has anyone ever used a 12-sided die? Really? What are they good for? AUDIENCE: D&D. PROFESSOR: D&D does d12s? AUDIENCE: Yeah. PROFESSOR: OK. I know they do d20s a lot. But, OK. 10-sided dice, 8-sided dice-- if you ever buy dice for yourself, a tip is to get them from kindergarten suppliers rather than from gaming stores. Gaming stores will probably charge about 10 times as much. This box cost me about $12, plus the box. You get a nice little case. Dice are good for randomizing, obviously. I do not suggest using dice to keep track of numbers. So say you've got a number that increments anywhere between 0 to 20-- sorry, 0 to 19, or 1 to 20. Don't use a 20-sided die to keep track of the thing, because it's very easy to lose that stat just by a flick of your hand. Just grab a piece of loose paper or something, and just write down that number-- if you need to keep track of stats. Don't use dice to keep track of stats.
But dice are good for randomizing; they can sometimes be used as tokens that move around, in a pinch. So you can use dice to do things like, one die is the tens, and one die is the ones. So I roll two 10-sided dice, and then it's going to give me a number between 1 and 99-- no, 00 and 99. Yeah. Index cards-- we've got white ones in here, we've got colored ones in here. They sell index cards that are a little bit closer to playing-card size, but remember the rule of keeping things big-- big sheets of paper. I'll go into a little bit more detail of why you want to do that. But if you're making like a card game, I would actually suggest using index cards. They are a little bit harder to hold in your hand. It's difficult to keep a whole bunch of index cards in your hand, like a fan. AUDIENCE: They're hard to shuffle too after a while. PROFESSOR: Yeah, they're hard to shuffle as well. But for prototyping, not only are they cheap-- we talked a little bit about doing things cheaply and very disposably. They're also larger, so it's easier for you to actually see when you're prototyping. And I'll get into a few more reasons why you want to be keeping things as big as possible. Post-it notes are not in here. They are in those boxes. Post-it glue and notepads-- by notepads, I mean stuff like this. For people to keep track of stats. We've got pencils, and we've got markers and pens in there, as well. Post-it glue-- if you haven't seen it, I'm pretty sure it's in one of the boxes-- it looks just like a regular glue stick, but it's blue. I might only have some in my office. Basically, it takes any piece of paper and turns it into a Post-it. It makes it into sort of a restickable piece. And not only is that useful for the brainstorming phase, where you can take index cards and turn them into Post-its and stick them up on a wall, they're really handy for prototyping because you can lay things out, say on a desk or on a sheet of paper.
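The two-d10 trick described above (one die as the tens digit, one as the ones) is easy to sketch in code. This is purely an illustration of the arithmetic, not part of the course materials.

```python
import random

def roll_d10():
    """One 10-sided die with faces 0-9, as on most physical d10s."""
    return random.randint(0, 9)

def roll_percentile():
    """Roll two d10s: one die is the tens digit, the other the ones,
    which together give a number from 00 to 99."""
    return roll_d10() * 10 + roll_d10()

print(roll_percentile())  # some number between 0 and 99
```

The same idea scales: three d10s would give 000-999, which is why percentile dice are a common physical shortcut for fine-grained random tables.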
And it won't just go flying if somebody sneezes. You can sort of keep things in place. If you've never done prototyping for, say, a user interface on a piece of computer software, it can also be really, really handy. Because you can cut out pieces of paper that are exactly the size of your menu bar or your window, or whatever. Just use the glue stick, make them replaceable and sticky, and you can place them anywhere on another sheet of paper. Pencils, pens, markers, scissors, tape-- it should be obvious why you want all of those things. Gamebits-- some of you saw some of these in last week's games. These are sort of stackable counters. We have the cubes. You can also use dice for gamebits. You can also use pieces from other games. You don't necessarily have to restrict your ideas to what we give you. Those, I believe, are rubber animals-- they often end up being used in tactical combat games, for some reason. I don't know why people always want pigs to fight chickens. Ducks hate pigs, right. Ducks don't stand up, that's the problem. That is the problem with our-- we have a whole bunch of little rubberized animals, and they don't stand up very well. They do tend to tip over. We also have a whole bunch of rubberized vehicles, and those tend to be a little bit more stable. So keep that in mind before you decide, oh, I have to use this chicken piece. Kindergarten counters, the things they use to teach kids how to count, make great, great counters for your design. AUDIENCE: I think we [INAUDIBLE] or two. PROFESSOR: Oh, you mean, like the idea is right there-- different colors on your side. So you can use them for currency in your game, for keeping track of points-- another way to keep track of a status-- just give people pieces that help them keep count. Your phone camera is extremely useful in keeping an archive of your work, of keeping track of your game and play, keeping track of who has what hand at any given time.
It's really, really easy, and a lot of phones now have a high enough resolution that you can sort of reliably use them to keep a record of your work-- way easier than trying to scan everything on the photocopy machine. So I used to recommend using a photocopier, but now, just take lots of shots with your phone camera while you are working. So you want to be keeping your prototypes rough. You want to be using hand-drawn materials, trying not to immediately go to opening up a Google doc and creating a spreadsheet, or anything like that. Start writing things down. Again, if you want to make a bunch of cards, just start writing stuff down on index cards. If you want to start making a map, just start using a marker and drawing it on the grid. You want to keep it sketchy, and you want to keep it large. And you don't want to be using too many inks. Just use one dark ink, and run with that. The reason for this is because you don't want people to be giving you feedback on how your game looks. If they do say something-- wow, this looks like crap-- that's fine. Just move on from there, because that's not the feedback you were looking for in the first place. If you start using lots of different colors, everyone will start talking about your color scheme-- maybe this should be red, maybe this should be green. If you start making things, say, printed out from a laser printer or something like that, people are gonna ask, wow, is this the final artwork? It doesn't look very good. Or say you take the trouble of using colored pencils, for instance, to nicely render an image on your cards, and people say, wow, this looks great, you hand-drew that. But now, if you wanted to alter it, it means you're going to have to go through all that effort again to draw a new picture. And that takes time. That takes more time than you need for this thing. You want to keep things sketchy to sort of convey to your testers that this is a work in progress.
If somebody sees something that actually looks very, very nice, they are going to think that you're close to final. And they are going to be a lot more hesitant in giving you drastic feedback-- things that are going to require drastic changes. But if it looks like you've spent half an hour, maybe 15 minutes, just sketching stuff on a bunch of cards, you'll get feedback from testers-- stuff like, I just don't like any of this, or, maybe this is the one thing I like, but everything else is just crap. But that's the kind of feedback that you want to get at the prototyping phase. And keeping things sketchy can sort of encourage people to give you that kind of feedback. I definitely have lecture notes, but they're not showing up on my screen. And so, I'm a little bit off right now. Here we go, OK. So the other thing is-- this is actually a 608 class from way back. The other thing that I want you to do is keep iterating over and over and over again. Yeah? AUDIENCE: So you said, earlier on, that after this class, we probably won't be able to playtest with each other because we'll know too much about the games. Does that also mean we should be changing our playtesters outside this class? Find a new group every time we revise the product? PROFESSOR: Absolutely. The next time we do a playtest in class, I'm going to specifically say to try to find a game where you don't know anything about that prototype before you start playing the game. That's where the feedback is going to be the most useful. You don't want people to come in with preconceived notions based on previous prototypes, because your prototype may have changed completely from the last time that they saw it. But then, they're going to come in thinking that your game is like some sort of natural progression from that previous idea. And they may have expectations of what kind of strategies are used.
If they already understood the rules-- what you verbally explained to them on day one, then they can't give you any useful feedback on how well your rules were written because they already know the rules. So when they read your poorly written rules, they can't tell you it's poorly written because they already understand it-- that sort of thing. So yes, always try to find new testers. So the purpose of iteration is to just repeat this over and over again. You start with a question that you're trying to answer. The broad question of Assignment 1 is what are all the different things that you can do with this mechanic? But say, I know someone suggested auctions on Monday-- what can you do with a Dutch auction? How does a Dutch auction actually work? --that sort of thing. You want a question that, not only is a clear thing that you can actually test, but also you can set criteria for what will be a successful test or an unsuccessful test. The question might be very specific, like this game is too long. Can we get this to run under 20 minutes? Can we get this thing to run under 15 minutes? Well, that's a falsifiable question, right? If the game play actually took more than 15 minutes, then it was a failed experiment. And if it took less than 15 minutes, then it worked. And you want to be able to go into the process asking, what is the thing that we're trying to solve? --before you start thinking of potential solutions. How many of you have heard of axiomatic design? It comes from McKee, I think. The theory behind axiomatic design is that, you have a couple of-- for any potential solution to a problem, it needs to fit a certain set of criteria. So you come up with a bunch of axioms, which you just take for true. So certain axioms that you might come up with a game would be-- this game can't take more than five minutes, or this mechanic can't take a player more than 10 seconds. 
So that axiom could be something like, we don't want the player to have more than five cards in their hand. These aren't necessarily always the right answers, but you set these criteria for yourself for a given test. And then, you start thinking about all the possible solutions to get you there. And if one of those solutions meets all of your criteria, or all of your axioms, then that's a successful test. If not, then it's an unsuccessful test. So a question in your game may be, can a player execute this mechanic in more than one way? Or, will a player execute a certain given mechanic in more than one way? We've given them three different ways to do it, but if they keep doing the same thing over, and over, and over again, then that will be an unsuccessful test. OK? Once you've got a question, you can start designing for that, right? Maybe the designing involves something very small, like, I'm just going to tweak numbers in rules that we've already written. It might be, we've got to rewrite half of our rules, or we've got to throw out this rule. Or maybe, we're going to rearrange the order in which these rules are executed. That's all design. The trick is to do it fast. The word rapid is there for a reason. If you are spending a lot of time discussing what the right answer is, just start designing two prototypes-- or more prototypes-- to test out all the outcomes. Anything that takes a long time to get hashed out in discussion is actually wasting your team's time, when you should be prototyping. Because you're going to learn a lot of things about your game on the side, besides the question that you're asking. If you have a discussion, then even if you do stumble across the correct answer, you're only going to get the answer to that one question that you asked. Make more prototypes to answer your questions, rather than trying to talk them out. Then, you do a playtest.
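As a rough illustration of the axiomatic-design idea above, a candidate design succeeds only if it satisfies every axiom. This is only a sketch: the function names and the candidate's fields are made up for the example, and the numbers come from the lecture's examples (playtime under five minutes, at most five cards in hand, a mechanic resolvable in ten seconds).

```python
# Hypothetical sketch of the axiomatic-design check described above:
# each axiom is a pass/fail predicate, and a candidate design passes
# only if it satisfies every axiom. All names here are illustrative.

def satisfies_all(design, axioms):
    """Return True if the candidate design passes every axiom."""
    return all(axiom(design) for axiom in axioms)

# Axioms drawn from the lecture's examples.
axioms = [
    lambda d: d["playtime_minutes"] <= 5,
    lambda d: d["max_hand_size"] <= 5,
    lambda d: d["mechanic_seconds"] <= 10,
]

candidate = {"playtime_minutes": 4, "max_hand_size": 7, "mechanic_seconds": 8}
print(satisfies_all(candidate, axioms))  # hand size fails -> False
```

The point of the all-or-nothing check is the one made in the lecture: the axioms are the criteria you committed to before looking at solutions, so a candidate that misses even one of them counts as an unsuccessful test.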
And that's the second part of my presentation, which involves the playtesting phase. But basically, you grab a bunch of people who don't know how this game is going to play out. First, you will probably end up playtesting within your own team, just to make sure that everything makes sense. And then you take it out to somebody outside, maybe someone else in the classroom, to see how they respond to it. And then you look at the results of that playtest-- did it address the problem? Did it give us any information towards the question we were asking? Maybe it was inconclusive-- you need to do another playtest. Maybe it indicated that we were in the right direction, but the changes that we made weren't drastic enough, or maybe were too drastic and [INAUDIBLE] down. That's when you make your revision, and then you repeat the whole process again. You can improve the quality of your question, be more specific. You might stick with the same question and just do a second version of the design for it. You just want to be repeating this over and over again. The more times you get to do this, the more refined your prototype is, and the more refined your final games are going to be. This is the same process whether you're making a prototype or whether you're making a full-blown board game or card game or computer game. The more chances you get to iterate on something, the more refined it's going to be. You do want to keep changing. Here are a couple of tips that-- actually, I will get back to these later, after you've had a chance to prototype once. Those are tips for how to get out of a rut. So let me talk about this instead-- keep track of all of your rules. Write your rules down. You can write your rules on cards, which makes them very easy to rearrange, to discard, to say-- all right, we're not playing with this rule right now. But then, maybe you can reintroduce it later. So you can use the index cards for that.
You can rearrange them to change the order in which they get applied. If you change a rule, update your card. If it's something like, I'm just going to change a number, you can do it right on the card. If you're actually changing the way a rule works, write it out on a new card. Take photos with your cameras, and try to simplify your rules to the point where you end up with a minimum set that makes a certain prototype playable. If you have too many rules operating at once, it can sometimes be really, really confusing to figure out where everything is going wrong. It's a lot easier to add new rules than to take them out, which is why I place the emphasis on taking stuff out. Because if I remind you to take it out, maybe you will do it once in a while. So that's going to be the process of prototyping. We're going to start handing out all of these materials. People who haven't figured out your teams yet should be having a discussion on which mechanics you want to work on, and how you're going to split up your teams. People who know what mechanic you're working on, or maybe are trying to decide between two mechanics, you can start splitting your team into two and working on two separate prototypes, for instance. And the goal is to have something that somebody outside your team can actually play by the end of class-- more accurately, by 3 o'clock. Because we are going to go into playtesting at 3 o'clock. And around 2:30, I'll go back to this slide to give you some ideas on how else you can change your designs, to help you get closer to your design goals. But right now, this is what I want you to do, so we're going to start handing out some of this material. AUDIENCE: Do you want another table outside [INAUDIBLE]? PROFESSOR: I'll be OK with the team moving up there. [INTERPOSING VOICES]
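The whole loop described in this segment-- pose a falsifiable question, design, playtest, evaluate, revise, repeat-- could be sketched like this. Everything here is an illustrative stand-in: a real playtest is a session with people who have never seen the prototype, not a function call, and the revision step below is a toy placeholder.

```python
# Illustrative sketch of the iteration loop described above. The
# playtest and revise callables are stand-ins for real sessions and
# real design work; only the loop structure is the point.

def iterate(prototype, question, playtest, revise, max_rounds=5):
    """Repeat playtest -> evaluate -> revise until the falsifiable
    question is answered 'yes' or we run out of rounds."""
    for round_number in range(1, max_rounds + 1):
        result = playtest(prototype)           # gather observations
        if question(result):                   # success criterion met?
            return prototype, round_number
        prototype = revise(prototype, result)  # otherwise revise, repeat
    return prototype, max_rounds

# Example question from the lecture: can we get this game under 15 minutes?
question = lambda result: result["minutes"] < 15
playtest = lambda p: {"minutes": p["minutes"]}       # stand-in for a real session
revise = lambda p, r: {"minutes": p["minutes"] - 4}  # e.g. cut a phase each round

final, rounds = iterate({"minutes": 25}, question, playtest, revise)
print(rounds)  # 25 -> 21 -> 17 -> 13 minutes: succeeds on round 4
```

Note that the question is phrased so that each round is falsifiable: a run over 15 minutes is a failed experiment, a run under it is a success, exactly as in the lecture's 15-minute example.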
MIT_CMS608_Game_Design_Spring_2014
22_Brainstorming.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: But this is going to be used to set some of the ground rules on brainstorming-- so that we can actually talk a little bit about what projects you want to be doing for assignment one. Just a reminder, assignment one is all about picking one mechanic and going as deep as you can. And that doesn't just mean using the same mechanic over and over again. This is exploring all the different ways that you can use a single game mechanic. And that game mechanic will be whatever you want to choose. So a lot of brainstorming is going to be around that. But you can do brainstorming around themes-- like, I wanted to make a game about-- there were games about time travel and snowball fights and CPs-- Campus Police. So you can do that. But if you were to brainstorm with that, I would suggest trying to develop that into, what is it about that theme that you're really interested in? Is it about the stealth? Is it about consequences? And try to bring the conversation back to game mechanics. And for those of you who can't remember the working definition I gave you on Wednesday, I'm describing a game mechanic as something a player does to change game state, right? So this guy Alex Osborn-- not the hot-tempered one in the image there-- was working in advertising, and he actually coined the word-- I think he coined the word-- brainstorming. And he wrote it up in "Applied Imagination" as a practice to just get a lot of ideas on the table, without any regard to quality, just quantity. And the reasoning behind that is that if you can just get a ton of ideas, then there are going to be some good ideas in there.
And you can go through a process later on-- usually for us it's going to be during the team formation process-- where you decide whether it's the actual idea that you want to work with. But you want to get enough ideas on the table that there are a couple of gems in there for you to find. And that's what this exercise is going to be all about. A couple of things I want you to remember while we do this. One, you don't criticize any of the ideas that have come up. I do want you to build on ideas, though. If you can think of a better way to say an idea, or a better spin on it, just throw that out as a new suggestion. You don't criticize the stuff that's been put out there, because you don't want to intimidate anybody from sharing their ideas. The worst thing that you can do is, somebody says, I want to make a game about blah. And then somebody else says, that's stupid. Wow, that was kind of dumb. I can't believe you said that. Things like that. That person may have even better ideas along the way but now would be hesitant to share them with the class. And one of those ideas might end up having been the idea whose team you would have loved to be on. So no criticism. If you want to judge the ideas on whether they're good or bad, you do it at the end of the brainstorming session, once you've already got all the ideas on the table. Then you can decide for yourself which ideas you want to work with. We want to keep it kind of freewheeling. It's OK for you to come up with ideas that are impractical, that are too ambitious, that maybe don't even really fit the criteria of this project. That's fine. The idea is to get the ideas out, and other people are going to come up with ideas that are appropriate. This is not the time for us to self-censor and decide, is this an idea that fits or not. Just throw it out. I will write things down. I'll keep things on the computer screen so that you'll be able to see them.
Then you can winnow later. Again, we're trying to go for quantity. This is a direct quote from Osborn: we're just trying to get as many ideas as possible. So try to keep it fast. Give me a little time to write things down on screen-- to type things out. But otherwise, just keep throwing out ideas. Bad ideas are great, because they add to the quantity, and throwing one out stops you obsessing about the bad idea, and then you can move on to other ideas. Build on ideas. Try to combine stuff that you're already seeing, that other people have already suggested. You can combine them to make something that's pretty awesome. Lego is pretty awesome, and Star Wars is pretty awesome, and Lego plus Star Wars is pretty darned awesome. Or Marvel and Capcom, right? Marvel vs. Capcom, right? That sort of thing. All right, so I'm going to talk a little bit about the principles of doing this, because this is the first time and I'm going to be the facilitator. But in the future, you're going to be doing this on your own. And once you've formed your teams, you may go through a second brainstorming phase. So you want to keep it kind of relaxed. And if it helps, I'll probably turn off the recording. Start off by throwing out the worst ideas that you can think of. Just start suggesting them, and I'll write them down. Don't interrupt anyone. I'm going to be the facilitator and secretary, so I'm going to be the person who's going to be writing things up. Actually, if you can help me with the facilitating-- Right. No, I'll do the typing. If you notice there are people who are hesitant to talk or trying to get a word in edgewise, help me identify them so that they can say what they need to say. And I'll talk a little bit more about the facilitator role. I've explained a little bit about the process-- actually, I'm talking process right now. And then the principal question is: what is the one game mechanic that a team could work on for assignment one? What is the one game mechanic that you want to work on?
What's the one game mechanic you would like somebody else to work on? What is the one game mechanic nobody should be working on? Throw it out here, OK? And I'm going to write down everything. So you're going to have to do this on your own, in your own teams, as well. So it's important to define-- if you are doing a brainstorming session by yourself, it's important to define what the aim of the session is. Is the aim of the session to get so many mechanics that you'll be able to identify at least one that you can start your project on? So if you're creating a brainstorming session for yourself, make sure that you are defining that problem ahead of time. If it's too complicated-- say you're trying to brainstorm a solution to a design problem, or something like that, and that problem has a lot of interlocking parts-- you may need to divide it up into separate brainstorming sessions. So articulate them as simply as possible, so that everybody knows what they're brainstorming towards. So far I've been explaining the problem and aim of the brainstorming session. The problem is that you don't have a project team yet. And Rick is going to help the ideation and is going to discourage criticism, if any of you automatically go, ehhh, that was dumb. Just stop snarking, or he's going to call you out. And he's also going to identify if people are having trouble getting a word in edgewise. Both of us will try to encourage some combination of the ideas. So we may throw in ideas which are just combinations of things that we see on the board. After this I'm going to be playing the secretary, and my job is just to record every idea. And just keep an eye on the time. We'll probably run this for about 20 minutes. And then after that, the rest of the class will be yours to discuss among yourselves and identify what projects you want to work on. I'll try to participate in the brainstorming. Again, to try to combine, I guess.
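The session discipline just laid out-- the secretary records everything verbatim, nobody judges during the session, and winnowing happens only afterwards-- could be sketched roughly like this. The class name, the example ideas, and the winnowing criterion are all illustrative, not from the lecture.

```python
# A tiny sketch of the brainstorming discipline described above:
# during the session every idea is recorded verbatim (no filtering,
# no judging); winnowing happens only afterwards, as a separate pass.

class BrainstormSession:
    def __init__(self):
        self.ideas = []

    def suggest(self, idea):
        # Secretary role: record everything, never criticize or censor.
        self.ideas.append(idea)

    def winnow(self, fits_criteria):
        # Judging is deferred until the session is over.
        return [i for i in self.ideas if fits_criteria(i)]

session = BrainstormSession()
for idea in ["trading", "pirates", "hidden goals", "robots", "auctions"]:
    session.suggest(idea)

# Only now do we apply the assignment's criterion -- e.g. "is this a
# mechanic rather than a theme?" -- as a hand-written judgment here.
is_mechanic = lambda idea: idea in {"trading", "hidden goals", "auctions"}
print(session.winnow(is_mechanic))  # ['trading', 'hidden goals', 'auctions']
```

The design choice mirrors the ground rules: `suggest` has no way to reject an idea, so the quantity-over-quality phase and the judging phase cannot accidentally mix.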
But if you're keeping it fast and flowing, then I don't even need to do much more than just write. RICK: When you're brainstorming on your own, it's really, really hard to make sure you're rotating the role of secretary. PROFESSOR: Yeah, you might want to break your brainstorming session into, like, two 10-minute chunks or something and get a different person to be secretary. And that just makes it easier for the person who is secretary to then contribute later. OK, any questions? OK, all right, let me bring up a little text for this session again. Sorry about the family photos. Is this too dark? Actually let me just do a text edit. What is TextEdit doing? Probably too big but, uh-- Oh, jeez. Just one sec. Is this still chopped up? Yeah. It's still chopped up, OK. It's very strange. But this should work. And I'm going to-- I can do columns in this thing, right? Can I? I can do tables? OK, all right, I don't think I can do tables. Oh, wait, tables. Perfect. OK, all right, I am just going to type what you say. What kind of game mechanics do people want to work on? Go for it. AUDIENCE: Unfairness. AUDIENCE: Building. AUDIENCE: Path building. PROFESSOR: What? AUDIENCE: Path building. PROFESSOR: Path building. AUDIENCE: Auction. PROFESSOR: Auctions. AUDIENCE: Resource acquisitions. AUDIENCE: Discarding. PROFESSOR: Oops. Oh, darn it, it doesn't just add new columns, OK. AUDIENCE: [INAUDIBLE] one row. AUDIENCE: And then [INAUDIBLE] PROFESSOR: All right, sorry about that. AUDIENCE: Trading. PROFESSOR: Trading. AUDIENCE: Stealth. AUDIENCE: Attacking. PROFESSOR: Sorry, one at a time. AUDIENCE: Attacking. PROFESSOR: I heard stealth. AUDIENCE: Attacking. PROFESSOR: Attacking. AUDIENCE: Destroying. PROFESSOR: Destroying. Did I hear one from here that's not on the board? I thought I heard-- AUDIENCE: Manipulating. AUDIENCE: Time. AUDIENCE: Pushing. PROFESSOR: Pushing. AUDIENCE: Manipulating time. PROFESSOR: What? AUDIENCE: Is that true though? AUDIENCE: Yeah. AUDIENCE: Cheating.
PROFESSOR: Cheating. AUDIENCE: Rules, changing rules. PROFESSOR: Changing rules. AUDIENCE: Lying. AUDIENCE: Inquisition. PROFESSOR: Sorry, one at a time. AUDIENCE: Guessing. PROFESSOR: Guessing. AUDIENCE: Prayer. PROFESSOR: Prayer? AUDIENCE: Yeah, why not? PROFESSOR: OK. AUDIENCE: Bombs. PROFESSOR: Bombs. AUDIENCE SINGS: Hallelujah. PROFESSOR: Prayer bombs. [LAUGHING] AUDIENCE: Where's Armageddon, bro? AUDIENCE: Manipulation of player's emotions. PROFESSOR: Manipulation of emotions. AUDIENCE: Gestures. PROFESSOR: Gestures. AUDIENCE: How about drawing? PROFESSOR: Drawing. AUDIENCE: Writing. PROFESSOR: Writing. AUDIENCE: Hidden goals. AUDIENCE: Erasing. PROFESSOR: Hidden goals. What was that? AUDIENCE: Erasing. PROFESSOR: Erasing. AUDIENCE: Unexpectedly ending the game. [LAUGHING] AUDIENCE: Accidental victories. AUDIENCE: Asymmetry. AUDIENCE: Symmetry. [LAUGHING] AUDIENCE: Uncomfortable situations. AUDIENCE: Puzzling. When a puzzling situation is puzzling. PROFESSOR: Puzzling situation. AUDIENCE: Designated winner and loser. PROFESSOR: What was that about winner and loser again? AUDIENCE: Designated winner and loser. AUDIENCE: Cooperative. AUDIENCE: Voting. AUDIENCE: Everyone loses. [LAUGHING] AUDIENCE: Everyone wins. [LAUGHING] AUDIENCE: Defector. AUDIENCE: [INAUDIBLE] PROFESSOR: What was that? AUDIENCE: Defector. PROFESSOR: Defector. And what's that one in the back I missed? AUDIENCE: Hidden agenda. AUDIENCE: Well, that didn't go as planned. AUDIENCE: No one knows what happened. [LAUGHING] AUDIENCE: Abstract representation. PROFESSOR: What was that? AUDIENCE: Abstract representation. AUDIENCE: You in the back. AUDIENCE: Oh, alliances. AUDIENCE: Story telling. AUDIENCE: Frisbee. AUDIENCE: More dice than you know what to do with. [LAUGHING] AUDIENCE: Map layout. AUDIENCE: Intimidation. AUDIENCE: Map building. AUDIENCE: Deck building. AUDIENCE: Army building. PROFESSOR: Oh, army building. I'm just keeping that up. AUDIENCE: Building building building. 
AUDIENCE: Meta building. PROFESSOR: Meta building? AUDIENCE: [INAUDIBLE] building. AUDIENCE: Recursively recursive. AUDIENCE: Damaged terrain. PROFESSOR: Damaged terrain. AUDIENCE: How about damage terrain? PROFESSOR: Wait, damage? What? AUDIENCE: You said damaged, didn't you? PROFESSOR: No, no, no, no. [INTERPOSING VOICES] AUDIENCE: Damaging terrain? AUDIENCE: Yeah, terrain that damages you. Damage terrain. AUDIENCE: How about terrain? AUDIENCE: Spikes everywhere. AUDIENCE: Non-playable characters. AUDIENCE: Playable characters. AUDIENCE: Too many playable characters. AUDIENCE: How about multiple roles? AUDIENCE: Climate. AUDIENCE: Shared player characters. PROFESSOR: What? AUDIENCE: Shared player characters. AUDIENCE: Growth. AUDIENCE: Solitude. AUDIENCE: Pain. PROFESSOR: Paint. OK. AUDIENCE: Decline. AUDIENCE: I said, pain. PROFESSOR: Oh, pain, OK. AUDIENCE: P a y ing. PROFESSOR: Oh, paying. AUDIENCE: No, I just said pain. [INTERPOSING VOICES] PROFESSOR: All these are valid! AUDIENCE: Listening skills. AUDIENCE: Mispronunciation. AUDIENCE: Telephone mafia. AUDIENCE: Knowledge of random pop culture facts. AUDIENCE: Mash-up. AUDIENCE: Trivia. AUDIENCE: Mash-up, whatever that means. AUDIENCE: Knowledge in general. AUDIENCE: Who gets the shiny. AUDIENCE: Onomatopoeia. AUDIENCE: There was another one over here. AUDIENCE: Disease. It was who gets the shiny. PROFESSOR: Disease. Who gets the shiny-- I can't believe I spelt that right? AUDIENCE: Tons of loot. AUDIENCE: Oh I said lewd. [LAUGHING] AUDIENCE: Time travel. AUDIENCE: Collecting too much. AUDIENCE: Double meanings. AUDIENCE: Resource management. AUDIENCE: Poison. AUDIENCE: Minimalism. AUDIENCE: Researching, I mean, sure researching. AUDIENCE: Racing. PROFESSOR: So what? Did I miss-- [LAUGHING] AUDIENCE: You wrote that earlier. PROFESSOR: What was that? AUDIENCE: Cannibalism. AUDIENCE: Robots. AUDIENCE: Settlements. PROFESSOR: Settlements? AUDIENCE: Yes. And hunting. AUDIENCE: Survival. Space. AUDIENCE: Pirates. 
AUDIENCE: Parallel worlds. PROFESSOR: Parallel worlds, and I think I missed something. AUDIENCE: Pirates. AUDIENCE: Exploring. AUDIENCE: Cool looks. PROFESSOR: Cool looks. AUDIENCE: Tabletop MOBAs. AUDIENCE: There is another one over here. AUDIENCE: Nazis. AUDIENCE: Zombies. AUDIENCE: Plants. AUDIENCE: [INAUDIBLE] AUDIENCE: Copyright infringement. [INTERPOSING VOICES] AUDIENCE: Copyleft. PROFESSOR: OK, I didn't catch-- what did you say? AUDIENCE: Patent trolling. AUDIENCE: Patent trolling. AUDIENCE: Mercy killings. [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] PROFESSOR: Pets. AUDIENCE: Fear. AUDIENCE: We've already done fear. AUDIENCE: Let's do it again. PROFESSOR: Spin again. AUDIENCE: Music. PROFESSOR: Music. AUDIENCE: Fear. PROFESSOR: What was that? AUDIENCE: Fear. PROFESSOR: Fear. AUDIENCE: Sound, why not? PROFESSOR: Sound. Other things in games that you like or don't like? AUDIENCE: Wrestling. AUDIENCE: Forcing someone to change their moves. AUDIENCE: [INAUDIBLE] AUDIENCE: Shooting stuff at people. AUDIENCE: Fake difficulty. PROFESSOR: What? Fake difficulty? AUDIENCE: Listen and repeat. PROFESSOR: Listen and repeat. AUDIENCE: It would be better for everyone if we all just worked together. AUDIENCE: Do we have cooperation? AUDIENCE: Yeah. PROFESSOR: It would be better for everyone if we all just worked together. AUDIENCE: Competitive cooperation. AUDIENCE: King raising. AUDIENCE: Oh, yeah. AUDIENCE: King killing. PROFESSOR: There's a word for that. AUDIENCE: Stabbing. PROFESSOR: Stabbing. AUDIENCE: Designated psychopath. AUDIENCE: Social commentary. PROFESSOR: Social commentary. AUDIENCE: Prisoner's dilemma. PROFESSOR: Prisoner's dilemma. AUDIENCE: Turkish delight. AUDIENCE: Morton's fork. PROFESSOR: What was it? AUDIENCE: Morton's fork. PROFESSOR: I don't know that one. AUDIENCE: You have two choices, either one will doom you, so. AUDIENCE: Fork. PROFESSOR: Fork. AUDIENCE: Doom. AUDIENCE: Death. AUDIENCE: Glory and honor.
AUDIENCE: [INAUDIBLE] AUDIENCE: Permadeath. [INAUDIBLE] AUDIENCE: I guess permanent changes in general. Growing changes that last between games. AUDIENCE: Rule making. PROFESSOR: Rule making. AUDIENCE: [? Kettles ?] and [? ball ?].. PROFESSOR: Well, that's-- AUDIENCE: That's rule making. PROFESSOR: I'll just write it down anyway. AUDIENCE: Brainstorming. PROFESSOR: Had to come up to earlier. AUDIENCE: Competitive brainstorming. AUDIENCE: Meta-gaming. AUDIENCE: Respawning. AUDIENCE: Spawn camping. AUDIENCE: Yeah, that's a-- [LAUGHING] AUDIENCE: Possession. AUDIENCE: Tent building. Illegal possession. PROFESSOR: I think I missed something. AUDIENCE: Tent building. PROFESSOR: Tent building? AUDIENCE: Yes. AUDIENCE: Capture the flag. AUDIENCE: Fire building. PROFESSOR: Building. Capture the flag. AUDIENCE: [INAUDIBLE] AUDIENCE: You wrote it up in [INAUDIBLE] ? PROFESSOR: What was it? AUDIENCE: Confusion. PROFESSOR: Confusion. And you said? AUDIENCE: Hacking. PROFESSOR: Hacking. AUDIENCE: Confucius. PROFESSOR: Confucius? AUDIENCE: Yeah. AUDIENCE: Philosophical. PROFESSOR: I can't believe-- Philosophical. AUDIENCE: Limited board information. PROFESSOR: Limited? AUDIENCE: Information about the board. PROFESSOR: OK, board information. AUDIENCE: No talking. AUDIENCE: Progressive loss of mechanics. AUDIENCE: Screaming. AUDIENCE: Screaming. PROFESSOR: Screaming. AUDIENCE: I was thinking of-- [INAUDIBLE] AUDIENCE: Funny voices. AUDIENCE: Do we have voice in general? PROFESSOR: You do now. AUDIENCE: Bankruptcy. AUDIENCE: Progressive [INAUDIBLE] mechanics. AUDIENCE: Folding. PROFESSOR: Folding. AUDIENCE: Total domination. AUDIENCE: Last man standing. PROFESSOR: What was that word? AUDIENCE: Last man standing. PROFESSOR: Last man standing. AUDIENCE: The American dream. AUDIENCE: Completely dominated. AUDIENCE: Trying to find information. AUDIENCE: Manifest destiny. AUDIENCE: The rules are the game. AUDIENCE: Make the rules up. AUDIENCE: Charity. AUDIENCE: Game within a game. 
AUDIENCE: Cooperation where one person secretly loses anyway. PROFESSOR: One person loses secretly anyway. AUDIENCE: Coopetition. AUDIENCE: What? AUDIENCE: Basically cooperative competition. Like I said earlier. AUDIENCE: Searching. AUDIENCE: Don't worry about it. AUDIENCE: Learning. AUDIENCE: Hiding. AUDIENCE: The game doesn't matter. AUDIENCE: Revenge. AUDIENCE: Torpedoes. AUDIENCE: Hiding from revenge. AUDIENCE: Hiding from the-- AUDIENCE: Torpedoes. AUDIENCE: Torpedoes. AUDIENCE: Multi-level boards. AUDIENCE: Not throwing dice at people. PROFESSOR: Which leads to throwing dice-- AUDIENCE: Anger management. AUDIENCE: Single player. AUDIENCE: Drinking. PROFESSOR: Single-player? AUDIENCE: Single-player drinking game. AUDIENCE: Single-player drinking anger management. AUDIENCE: Players too similar. AUDIENCE: If you can't beat them, join them. AUDIENCE: Unicycling. AUDIENCE: Therapy. [INAUDIBLE] PROFESSOR: What was that? AUDIENCE: Therapy and unicycling. AUDIENCE: And unicycling. AUDIENCE: Unicycling therapy. AUDIENCE: What was that? AUDIENCE: Cooperative unicycling. [INTERPOSING VOICES] Switching teams. PROFESSOR: I feel I missed at least one. AUDIENCE: No, I think you-- AUDIENCE: You combined the two of them. AUDIENCE: Morse code. AUDIENCE: Build-your-own code. AUDIENCE: [INAUDIBLE] AUDIENCE: Hidden messages. AUDIENCE: Language barrier. PROFESSOR: What was that? Language barrier? AUDIENCE: Yeah, language barrier. AUDIENCE: Mastermind. AUDIENCE: Horticulture. AUDIENCE: Game Master. PROFESSOR: Game Master. AUDIENCE: Mining. PROFESSOR: Mining we got. AUDIENCE: [INTERPOSING VOICES] AUDIENCE: Data mining. AUDIENCE: If it's been said you can say it again. AUDIENCE: Pick a card, any card. AUDIENCE: 104-card pick-up. AUDIENCE: Worst sequel ever. AUDIENCE: Worst sequel ever? AUDIENCE: Which of the 104 cards is missing? AUDIENCE: Never playtested this game. PROFESSOR: What? AUDIENCE: Never playtested this game. [INTERPOSING VOICES] AUDIENCE: Playtesting.
AUDIENCE: Player modification of game. AUDIENCE: Knocking everything over. PROFESSOR: What? AUDIENCE: Knocking everything over. PROFESSOR: Knocking everything over. AUDIENCE: No time to explain. PROFESSOR: Also the name of a great game. AUDIENCE: Follow the leader. AUDIENCE: Speeding. AUDIENCE: Take down the leader. AUDIENCE: The leader follows you. AUDIENCE: Oxygen asphyxiation. PROFESSOR: Did I spell that right? Oh, good. Right. AUDIENCE: Figure out who's following who. AUDIENCE: Predator and prey. AUDIENCE: Prediction. PROFESSOR: What was that? AUDIENCE: Prediction. PROFESSOR: Prediction. AUDIENCE: Prophecy. AUDIENCE: Fantasy. PROFESSOR: Fantasy. AUDIENCE: Invisibility. AUDIENCE: Getting roles. Getting roles expressed through avatar. Invisible avatar. PROFESSOR: Through avatar? AUDIENCE: Yeah. AUDIENCE: Moral choices. AUDIENCE: The world is ending, let's have fun. AUDIENCE: Moral ambiguity. AUDIENCE: Feedback loops. AUDIENCE: Mind reading. AUDIENCE: Immoral ambiguity. AUDIENCE: Mind control. AUDIENCE: Imitation. AUDIENCE: Immortality. AUDIENCE: Flattery. AUDIENCE: [INAUDIBLE] PROFESSOR: What was that real fast? AUDIENCE: I said we're going fast. AUDIENCE: When did we start doing this? When did we start? PROFESSOR: It's been almost 20 minutes, almost. AUDIENCE: Sniping. AUDIENCE: Prettiest princess. AUDIENCE: What? AUDIENCE: Highly adaptable. AUDIENCE: Discrimination. AUDIENCE: Who can eat the most food? AUDIENCE: Fashionista. AUDIENCE: Coffee. AUDIENCE: Illness. PROFESSOR: Illness? Did I miss one? AUDIENCE: Foiled plans. AUDIENCE: Saber and foil. AUDIENCE: Tin foil. AUDIENCE: Plane crash. PROFESSOR: Plane crash? AUDIENCE: MacGyvering. AUDIENCE: Archery. AUDIENCE: Assassination. AUDIENCE: Pirates. PROFESSOR: I think we had pirates. AUDIENCE: Yeah. Well, I love pirates. AUDIENCE: Beating the professor. PROFESSOR: Beating the professor. AUDIENCE: Not physically. AUDIENCE: Secretly broke. AUDIENCE: Bluffing. AUDIENCE: Handicaps. AUDIENCE: Time travel.
AUDIENCE: Time travel's on there. AUDIENCE: Someone said that in the future. AUDIENCE: Space travel. PROFESSOR: Space time? AUDIENCE: Space travel. PROFESSOR: Space travel. AUDIENCE: Multiple mediums. AUDIENCE: Slow and painful. AUDIENCE: Quick and painless. AUDIENCE: Quick and painful. AUDIENCE: Loyalty. PROFESSOR: Quick and pain-- what was that, royalty? AUDIENCE: Loyalty. PROFESSOR: Loyalty. AUDIENCE: Swapping seats. AUDIENCE: Fully drunk. AUDIENCE: Pulling seats out from under. AUDIENCE: Oh God. Running in circles. AUDIENCE: Dogs. PROFESSOR: Dogs. AUDIENCE: Cats. AUDIENCE: Sore losers. AUDIENCE: Flipping the board. AUDIENCE: Flipping part of the board. [INTERPOSING VOICES] AUDIENCE: Knocking over pieces. AUDIENCE: Air ducts. AUDIENCE: Organized mayhem. AUDIENCE: [? Ducks ?] in the air. AUDIENCE: Mischief. AUDIENCE: Finding your lost friends. AUDIENCE: Silence. AUDIENCE: Finding your drunk friend. AUDIENCE: Finding your [INAUDIBLE] friend. AUDIENCE: I did say [INAUDIBLE] AUDIENCE: Drinking your lost friends. PROFESSOR: What was that? AUDIENCE: Drinking your lost friends. AUDIENCE: Vampirism. AUDIENCE: Vampirism. PROFESSOR: What was that? AUDIENCE: Vampirism. DUI. AUDIENCE: Changing goals. PROFESSOR: I think we had that, but I can't remember. AUDIENCE: Giving up on your dreams. AUDIENCE: Mobile boards. AUDIENCE: Storytelling-- AUDIENCE: Mutually assured destruction. AUDIENCE: Storytelling that riffs off of Crayola colors. PROFESSOR: --on Crayola colors. AUDIENCE: Multiple boards, was it? AUDIENCE: Yeah. AUDIENCE: And what was it between? AUDIENCE: Mutually assured destruction. AUDIENCE: There you go. AUDIENCE: Knowing when to give up. AUDIENCE: Going outside. AUDIENCE: Well, that was a waste of time. [INTERPOSING VOICES] AUDIENCE: Last tile. AUDIENCE: 300. AUDIENCE: Last tile. Finally! PROFESSOR: OK, we have more than enough ideas. So I feel that one thing that we could do is weigh in on which ones might fit the criteria of the class better.
That's not necessarily saying that any of the ones we identify are good or bad ideas. But it seems like if you were to do something on one of them, you would fit the criteria pretty well. AUDIENCE: You want people to call out? Raise their hand? AUDIENCE: What are the criteria? PROFESSOR: Well, actually, I thought that we'd just identify them, as instructors-- some feedback for everyone. What we suggest today doesn't necessarily mean these are the only ones that you can work with. After this, the class is just open to you. You form a team. Come back on Wednesday with a team. Talk with each other. At the end of class you can stand up and say, I want to do something on-- OK, I'm going to identify trading as one which I think is a mechanic, right? So if I want to do something on trading, you go and talk and figure out whether you want to be on that team. AUDIENCE: [INAUDIBLE] PROFESSOR: No more than four. I'm going to suggest at least as many people as you have players in your game. So if you are making a four-player game, you need a four-person team, because it just makes it easier for you to test-- at least in the early stages. If you want to make a two-player game, you need at least two people. But you can have four people on a two-player game. With more people than that, it's hard to schedule, and you go kind of slowly. But four people making a two-player game means you can make two games instead of one. PROFESSOR: Yeah, or try different-- simultaneous prototyping, and riff on each other's ideas. Things that I see that fit the requirements-- discarding, path building. Building, I think we get into more specific stuff later, so I'm going to come back to that. AUDIENCE: Hidden goals. PROFESSOR: Hidden goals. AUDIENCE: Manipulating time? PROFESSOR: Manipulating time. Manipulating time I think is probably more like a combination of different mechanics. AUDIENCE: Yeah, more of a theme than it is a mechanic. PROFESSOR: Yeah.
Theft I guess could be a mechanic-- picking something that somebody else doesn't want you to take. AUDIENCE: Decline? Like growth and decline, those two. PROFESSOR: Growth and decline. Ah growth! AUDIENCE: Further down. PROFESSOR: So these two, right? I think these two combined-- yeah, definitely something that-- actually, I think even separately is fine. Some sort of mechanic about slowly just rolling downhill, right? You could make a game just about that. Let's see. There's a lot of stuff here now that I realize how many ideas got thrown out. There's a lot of stuff here that does fit. So everything that I'm identifying here doesn't mean that these are the only things that fit. But just trying to give you some information about-- like voting I think can be taken in a million different ways. And that's exactly-- it's all voting. But how you do different kinds of voting can be explored in so many different ways in a game. AUDIENCE: Auctions? PROFESSOR: Auctions? Yes. Frisbee, I actually think it's a great-- it's a constraint like you can only do things with this one thing, right? And I think Tron only scratched the surface about what that could be-- although probably too dangerous but-- Terrain in general, all of these I think fit-- AUDIENCE: Especially if you were thinking of tiles. AUDIENCE: Or little armies shooting each other. AUDIENCE: Multiple roles is pretty close to it. It does need a little bit more what the roles are actually doing. AUDIENCE: Whether you choose from a selection of roles. AUDIENCE: Or having multiple roles. PROFESSOR: Having multiple roles simultaneously. I think you want to probably identify something a little more specific off the multiple roles basic idea. But you can do that in your team. Just scrolling down a little bit more. Racing is more like a whole genre. So maybe identifying something inside racing. A mechanic will be things like boosting, for instance, is a mechanic. I'm going to say, not Nazis, not pirates, not robots. 
These are things that show up in games and often the more often they show up in games the less interesting the game often gets. AUDIENCE: Resource management? AUDIENCE: Resource management. It's right above research, right above robots. PROFESSOR: Yeah, resource management, again, is kind of a category of mechanics. I think you want to identify a specific kind of resource management, whether it's like supply and demand-- I think there's already tons of things that you can do with just plain supply and demand. Trading also fits in resource management, right? But trading is more specific than resource management, I feel. So inside resource management there's a pile of different ideas, and you want to focus on probably just one. I want to be a little bit more specific on what I mean by that. I don't mean that your game should just have one idea. It means I want you to identify a game mechanic that can be played by different people in different ways through the course of the game. So there's a lot of different ways you can execute a trade. Then that's why I think that's a good fit. But the end result is still exchanging something with somebody else, right? So you can see that you're comparing apples to apples here. AUDIENCE: Guessing? Is a good one. PROFESSOR: Guessing? Yeah. Yeah, there we go. All the different ways you can set up-- like intuit ideas. That's one that needs someone to do a really deep dive in because if you just do a shallow guessing game, it's a shallow guessing game and it's not interesting. But if you really think hard about how do I-- all the different ways I can make guessing interesting-- Deck building is a good mechanic but really, really hard to do for any kind of assignment in this class, even in your last assignment. The reason for that is deck building implies lots and lots of cards. Lots and lots of cards makes it hard to iterate because you're generating a ton of things just to make a single prototype. 
You want to make a second prototype you're going to generate a new batch of cards-- time consuming. You're going to have blisters on your hands cutting things out. That's my warning about deck building. AUDIENCE: [INAUDIBLE] AUDIENCE: That game had deck building. PROFESSOR: That game was a kind of very, very narrow-- fairly elegant implementation of a deck building game. Yeah? AUDIENCE: A variation could be handled, but you only have seven cards that you can have at any one point [INAUDIBLE] PROFESSOR: Yeah-- things like deck building, card drafting, and everything-- anything that comes from the collectible card gaming world, you have to be very cautious of because of the difficulty it is to prototype something like that because it generates so much stuff each time you want to do a revision. Which is not to say it doesn't fit the criteria. It's just difficult to execute. Resource management definitely came up twice. Instead of MOBAs you might want to think of something like [? lane ?] pushing, or tower-- not tower defense the genre but defending a tower. AUDIENCE: What are MOBAs? PROFESSOR: MOBAs are games like League of Legends, Dota 2. It's a genre title for multi-- AUDIENCE: Multi-player online battle arena. PROFESSOR: --Player online. Yeah. It's a team based game, usually five a side. But these are all genre conventions. There's nothing that requires that. Bankruptcy is something that shows up in a lot of games but usually as a lose condition. It would be really interesting to think of all the ways that you can go bankrupt in an interesting way because reality is that actually there's a lot of them. In reality you can go bankrupt to your advantage. Permanent changes to the game. Difficult prototype again. But it's still possible. You're going to generate a lot of games that you're going to hand out to people. And the difficulty about it is that you don't really see the feedback until someone's played the game multiple times. 
So good idea, difficult to prototype. Again a lot of stuff here is great. I'm just looking for things I can comment on. Torpedoes is kind of really interesting actually. I know it's a noun rather than a mechanic. But there are things that torpedoes do that are not things that projectiles do in many games. They're stealthy things. They are launched from things that are already hidden, but they give away where they were shot from. All the countermeasures that are involved in torpedoes. Everything that has to do with something like code breaking, hidden messages, language barrier. I feel that exchanging messages in a way that you're hiding the message at the same time might actually be something that one team could actually just take on for this assignment. AUDIENCE: It also has the-- you can easily go shallow with it, with a goal to go deep. PROFESSOR: Yeah, try to identify something they can go deep in-- that you can do a lot of different takes within the same game on just that one mechanic. Same thing that has to do with any kind of clothing, changing avatar customization, and everything like that. That one's a bit tricky because usually that's in service of a larger game where the avatar customization has some varying level of utility. If you're going to do something like changing your clothes, the point is to change your clothes. The point isn't in service of some other game. In that game I think the point is in fact to change your clothes. And in fact that's changing clothes. If you haven't seen the game that I'm talking about it's called Dress for the Show. It was one of the games from last year. These could be difficult to prototype because of the cleanup process. You might want to-- yeah, it's difficult to play in class too so keep that in mind. This one's kind of hilarious. Anything that has to do with drinking, please don't make a drinking game in this class. I can't officially test it in class. 
And that would be-- I'd have to answer to the committee on curriculum. And I don't want to do that. AUDIENCE: They are good expressions of the guessing aspect of it though. PROFESSOR: What drinking games? AUDIENCE: No, the finding games. PROFESSOR: Oh, the finding games. Yes. AUDIENCE: And the drinking game, but-- PROFESSOR: OK, so hopefully that gives you some idea. Of all the ideas that have come out, some of these ideas fit what we're looking for. I'd say, if anybody wants to in class-- hey, I really want blah. Just go for it. I'm just going to copy and paste this into a file that's visible only to students in this class and upload it on our Stellar site. So, they'll either be in the announcements or in the handouts section so that you can refer back to this. The class email list is-- it should be game-design. But I haven't updated the list yet, I think. So let me just check this. Web Moira? Yeah. It's just game design. I suspect it has last year's students on it, which means it's not useful. Yes. It has last year's. S-P-14-C-M-S-6-0-8. That's our list. OK, I'll have to fix that somehow. I am going to add S-P-- I'm going to figure out exactly what the name of our email list. And then I'll put that definitely on the announcements page. So you can all discuss online as well. Otherwise, that's class so if anyone has anything to yell out-- AUDIENCE: Show up Wednesday with your team or pitch on Wednesday? PROFESSOR: On Wednesday you should come in knowing what mechanic or mechanics you feel like working on. If you have your team, even better. If you don't have your team by Wednesday, it's a very short amount of time, so I think it's OK. But you should have your team by the end of Wednesday. OK?
|
MIT_CMS608_Game_Design_Spring_2014
|
12_Game_Mechanics.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So let's do a bit of a discussion, we've got about 20 minutes. So these are the games that we just played. And I just want to have a discussion, again. What are the mechanics in these games? We've been talking-- I've gone to a lot of teams to ask about core mechanics, because that's really the thing that you're going to have to worry about for assignment one. Well, let's talk a little bit about mechanics in general. Again, the definition that I want to work with-- at least for assignment one-- is a set of rules, could be more than one-- usually more than one-- that is going to allow a player to change the game state. Now I never defined game state. But anyone want to throw out how you would interpret game state? AUDIENCE: Like all the public and private information on the board right now. PROFESSOR: OK. All right, that's-- all the information, whether you know it or not, that could be in the game. AUDIENCE: Yeah, for me, it's affecting other players. PROFESSOR: Affecting other players? AUDIENCE: Yeah. PROFESSOR: It's-- all the informa-- what, what affecting other players? AUDIENCE: Isn't it like, what play is available to them, or-- PROFESSOR: The decisions that are available to them, is one way to convey it. AUDIENCE: Sure. PROFESSOR: --to them could be one way to interpret it. OK, all right. I thought I saw another hand. AUDIENCE: I was thinking, I don't know how to explain it, but, yes, you're having a board or whatever type of medium that you have is subject to change. PROFESSOR: Right. AUDIENCE: We go from, I don't know, something should move or if something doesn't move, then there should be a reason why it didn't move. PROFESSOR: OK. 
So, one way that I interpret that is that there has to be a variable. It has to be something that isn't necessarily the same every single turn, right? It's like the layout of a Monopoly board-- it's not what I would describe as part of the game state, because that never changes from turn to turn. AUDIENCE: So say you were to take some group of people elsewhere, across the country, a group the same size as the group that's playing your game. And each one of you sends one telegram to one of them with the minimum amount you need, the minimum stuff you need to tell them in order for them to be able to continue your game from where you were. PROFESSOR: OK, so you've got an evaluation criterion: whether you fully described the gameplay. But yeah, all these are useful ways of thinking about what is the game state. What's the stuff that can change? Which is-- and what's all the information? Whether you know it, whether an individual player knows it or not. To be able to reproduce a state-- to be able to reproduce a game in progress and then be able to carry on, right. Sometimes things are unreproducible, like sports, for instance. If you try to halt the game halfway and then reproduce exactly the same weather conditions as when the game was halted, it is kind of difficult. But you know that if you really, really wanted to be able to continue that you have to make a decision on whether the weather is part of the game state. In sports, often it's the reason why the game was interrupted in the first place. AUDIENCE: Do you think, at least in sports, fatigue is part of the game state? PROFESSOR: Yeah, I think so, which makes it kind of difficult, right? It's, like, we're going to start the game at halftime, and then we'll pick this game up tomorrow, playing the second half. And it's-- that's a very different game from the typical one. 
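The "telegram test" described here-- send another group the minimum information they would need to resume your game exactly-- can be sketched as a small serialization round trip. This is an illustrative example, not from the lecture; the field names and the particular game they describe are made up.

```python
import json

# Illustrative sketch of the "telegram test": the game state is the minimum
# information another group would need to resume a game in progress. Static
# things (the rules, a fixed board layout) are deliberately left out because
# they never change from turn to turn. All field names are hypothetical.

def serialize_state(board, pieces_remaining, current_player):
    """Pack every variable part of the game into one 'telegram' string."""
    return json.dumps({
        "board": board,                        # only the cells that can change
        "pieces_remaining": pieces_remaining,  # per-player unplayed pieces
        "current_player": current_player,      # whose turn it is
    })

def restore_state(telegram):
    """Rebuild a game in progress from the telegram alone."""
    state = json.loads(telegram)
    return state["board"], state["pieces_remaining"], state["current_player"]
```

Anything the round trip fails to reproduce (weather, fatigue) is exactly the kind of thing the discussion says you must decide whether to count as game state.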
[INTERPOSING VOICES] AUDIENCE: Instead, make it so that they have to start a different game, right, for the first half. PROFESSOR: First, exhaust yourself, or we'll throw those results out. So-- so [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: do something else with them. [INTERPOSING VOICES] PROFESSOR: Well, I mean, that's a problem. If you actually had-- and this does happen in, not marathons, but multi-day races. You have the situation where people get a good night's sleep before they resume the second leg of the race. And at which point they decided that fatigue is not the thing that they're going for. Where they've tried to quantify your advantage or your disadvantage based on your start time the next day or something like that, right? I'm trying to think of a competition that does that. I think that-- the Tour de France is not one day, right? It's multiple days. Yeah. So you have to stop. You have to sleep. You have to wake up, and then, depending on when you've reached the checkpoint, that determines when you get to leave the check-- the-- yeah. AUDIENCE: I was going to say. I know, in cricket, they have multiple day competitions. And near the end of the one day, it could be dark [? for the gingham and like ?] [? clothlike ?] The game doesn't get finished for the day, and they'll put in-- they'll specifically put in... The team's strategy, basically, is trying to delay until the next game starts there. PROFESSOR: Oh. AUDIENCE: They put in-- PROFESSOR: So that their rules will sort of counteract these exploitative strategies. AUDIENCE: They'll, like, put in a pitcher who's younger. They'll put in a batter whose job isn't really to swing it very much, and all they're trying to do is delay until the game ends for the day, basically. PROFESSOR: Yeah. I mean, it's weird, because you have a bunch of these strategies, and there are probably rules that will come up to prevent some of the worst strategies from being put into play. 
You've got the same thing in baseball. Sometimes you put in these late-inning pitchers, right? And they're different from your starters-- they try to achieve completely different things. So we've talked a little bit about game state, and we've played all of these games. All of you played a good portion of them, and I have gone from team to team to try to talk a little bit about the mechanics. So let's just pick one that not that many people played so we can talk a little bit about it. I think Blokus was only played once today, by this group, right? So game state is characterized by-- AUDIENCE: [INAUDIBLE] location of pieces. PROFESSOR: Yeah, the location of pieces on the game board. But there is-- I think every player has the same set of pieces, just in a different color. And which pieces have already been played. So it's not just the location, but also which pieces have been played and which pieces are still available to them. Whose turn it is, because that's part of the game state. Because you take turns placing pieces, right? So-- so what's the core mechanic of Blokus? AUDIENCE: You place tiles. PROFESSOR: You select a piece from all of the pieces you haven't placed yet. And then you figure out where it goes. And the rule that you have to meet before you put it down? AUDIENCE: It has to touch the corner of a piece but not an edge. PROFESSOR: A corner of one of your own pieces. Right. AUDIENCE: But not an edge of [INAUDIBLE] PROFESSOR: Right. So you can't have two pieces butting up like that, but you can have two corners touching each other. And you have to have corners touching. AUDIENCE: And it doesn't matter where the pieces are on the board, where the opponent pieces are on the board, except you can't put it [INAUDIBLE] PROFESSOR: You try to find a little open space where your piece will actually fit. Out of all of the pieces that you've got, which is part of the game state. 
And, of course, that changes game state by not only changing what pieces are on the board, as we described earlier, but also takes away a piece that you've already got. That's the core mechanic. In, I think, the next class, let me just check-- take a look at this again. Oh, wow, it's going to be February 19. No, it's going to be, holy cow, it's going to be a month before we get to chapter 2 of this book. Seriously? OK. For your sanity, you might want to read chapter 2 of Challenges for Game Designers, because it's five pages. It's not much. It's pages with illustrations on them. They go fast. Chapter 2 goes into a lot of more specific definitions. But they also talk about something called core dynamic, which is something I actually don't necessarily want you to worry too much about right now. They are right in identifying that, often, the core dynamics are actually more important than the core mechanics of the game. So the core dynamic probably has a bigger influence on what a player is going to experience. Whether it's some sort of crazy frenetic game where you're trying to screw over your opponent, or when it is animal versus-- animal upon animal, where it almost feels like a cooperative game at times. Even though there's a winner, you're all kind of playing together to not take the whole thing over. Those things often come out of the dynamics. We'll go into the theory. Some of you have already encountered this in other classes, the mechanics, dynamics, aesthetics theory. We'll get into that later into this semester. And that's just a very convoluted way of explaining why sometimes game design is hard. But the mechanic is the thing that you get to control as a designer. You get to write the rules. You get to decide what collection of rules that the players have to go through in order to be able to change the state of the game. And I want you to go through deep, deep permutations of what you can possibly do with one core mechanic. 
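The Blokus placement rule described above-- a new piece must touch a corner of one of your own pieces and must never share an edge with one-- can be written down as a small legality check. This is a minimal sketch assuming a dictionary-based board representation; it is not from the lecture, and it omits the first-move rule (that your first piece must cover a board corner) and the tracking of which pieces have been used.

```python
# Sketch of the Blokus placement legality check. The board is a dict mapping
# (row, col) -> player id for occupied cells; piece_cells is the set of
# (row, col) cells the new piece would cover. Edge contact with OPPONENT
# pieces is allowed; only contact with your own color is restricted.

def is_legal_placement(board, piece_cells, player):
    corner_touch = False
    for (r, c) in piece_cells:
        if (r, c) in board:  # can't overlap any existing piece
            return False
        # Sharing an edge with your own color is forbidden.
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            if board.get((r + dr, c + dc)) == player:
                return False
        # At least one diagonal (corner) contact with your own color is required.
        for dr, dc in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
            if board.get((r + dr, c + dc)) == player:
                corner_touch = True
    return corner_touch
```

Note how the check only reads the variable parts of the game-- occupied cells and their colors-- which is exactly the game-state framing from the discussion.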
So with Escape-- he's a noted game designer. In fact, there's a little bio of him in Chapter 1, I think. Yeah. In the chapter there's a little photo of him, the guy who designed that game. And it's very clearly him trying to do everything that he can do with dice in a short design period of time. And when you see his other games later this semester, you will see that he's trying to do everything he can do with options. He's trying to do everything that you can do with a design, and he's thinking about these game mechanics. And that's the thought process I want you to go through. All right. So that leaves us with just a little bit more time. I'm going to end this with the stupidest question in the world, which is, what is a game? What is a game? We're three hours into the class, and we haven't actually talked about this yet. Why? Why? OK, maybe I'll take a step back. Why is it a stupid question? AUDIENCE: It's not a stupid question. PROFESSOR: Is it? Why did I describe it as a stupid question, maybe. That might be-- AUDIENCE: It feels kind of intuitive. Oh, I can look at something and say this is a game, this is not a game. But to actually describe it-- maybe there's some edge cases where it gets really difficult, and then to actually define what is and what isn't gets really murky. PROFESSOR: That is definitely-- that is definitely true. But you have a counterpoint? AUDIENCE: A game is usually something with a, I guess, set goal in mind-- given, I don't know, a built-in objective and rules. PROFESSOR: Objectives, rules, objectives or sub-objectives, goals, as you describe them. Some sort of constraints, some-- AUDIENCE: Like a mechanical constraint. Like an objective and mechanical [INAUDIBLE] I consider it a stupid question because it is. Everybody knows what a game is-- that's a ridiculous thing to say. Of course I know what a game is. But when we actually look at it? It gets hard. It gets convoluted as you're playing. That's why it's such a dumb question. 
PROFESSOR: Again, so many edge cases, so many edge cases, but there are these things that a lot of games do share. These goals that you're working towards, these constraints that you are trying to work in, these decisions they're trying to make. AUDIENCE: They're definitely [INAUDIBLE] games, which have had like sub-objectives in them. Like another set of objectives, and it's hard to say like-- And [INAUDIBLE] like-- The way that the [INAUDIBLE] is like-- I don't think a game necessarily has to have [? real ?] goals. PROFESSOR: It's crazy to think of a game that people might recognize as a game that doesn't meet all of these criteria. AUDIENCE: So I think of it as something with a goal, an entertaining goal. So basically a goal that would be entertaining to obtain. But I also think that's very dependent on what you think of as a game. And someone mentioned that, oh, it's obvious. If you look at a game you feel like this is a game. It's not. But I think that it's not as obvious. I think with Animal Upon Animal, every one of us can look at that and say it's a game, but there are a lot of things that kind of fall on the edge. Some people will define this as a game where other people won't. [? Want To play ?] [? a game. ?] [LAUGHTER] Or my dad's really angry with me. I have to pick up all the cards. AUDIENCE: I personally like the definition of unnecessary obstacles, because it-- There's a play space, and you don't have to be in it, but you choose to be in it, and you're giving yourself these obstacles to get over or whatever. PROFESSOR: I think that's Bernard Suits who posited that, and he likes to use golf as an example. Right, because clearly golf is an inefficient way of delivering a ball into a cup. AUDIENCE: [LAUGHTER] PROFESSOR: Yes, let's just use this very long stick that's weighted weirdly and then put the ball really far away. And give you a rule that you can't just pick it up and drop it in there. 
So unnecessary obstacles, it's kind of a nice way to describe a criterion that a game could meet. AUDIENCE: So it's interesting because the-- oftentimes a lot of people, the creative people like artists and other such people, will say constraints actually let you be more creative. So that's, I think, what's interesting about games, specifically with strategies and stuff-- the fact that you can't just pick up the golf ball and walk to the hole and put it in. It means you actually have to learn. Well, this golf club, it goes for long distances and this and that and that. So all these constraints yield different strategies, which is a really interesting insight. PROFESSOR: Right, and even in the example of the cricket thing, right? You have this weird collection, a set of rules, that leads to this emergent strategy, which is like-- and now we put in this particular kind of batter or pitcher that is going to help us maximize our-- what's going to extend this game by a day. So why is-- it's not entirely-- it's not an entirely futile experiment to try to define games. In fact, I believe we do have a definition of games coming up in one of the Rules of Play readings, and it is one that is functional. And it has goals. It has the constraints that you're working in. I believe they also have it describing an activity that's actually carved out of regular life. And the consequences that happen inside it don't follow you outside it. But as many people have already pointed out in class, if I gave you a little bit of time you would be able to come up with a game that people will recognize as a game, and that falls outside of that definition. The reason why I personally-- this is not-- this is not going to be useful to you outside of this class. So I'm going to tell you personally the reason why I don't like that question, what is a game, is because it invites you to carve things out of the game space. 
To say that thing is not a game because it doesn't meet this definition-- I find that activity to be pointless. Just telling people who've made something and have decided to call it a game that the thing that they created is not a game-- I actually feel that is a very antisocial, very noninclusive way of thinking about what could actually be useful in thinking about it. But for your own benefit of working in a team, it is reasonable to set goals that you want to hit with your game. Many of the descriptions that have already come up in the past 10 minutes are actually things I would use to describe good games, but not necessarily games as a whole. Entertaining, for instance-- it's like, how many of you can think of a game that's not entertaining, but it's a game? AUDIENCE: Monopoly? [LAUGHTER] PROFESSOR: Sure. AUDIENCE: The game. PROFESSOR: What else? AUDIENCE: War and [? golf are. ?] PROFESSOR: War. Golf. It depends on you, right? It depends on the player. Actually, a lot depends on the player. You can come up with the best game in the world and find people who hate it. Conversely, you can come up with the worst game and, well, find people who would enjoy it. And you can make a game that's trying to deliver a particular message, and people are going to walk away with completely the wrong interpretation of it. And that's fine, because they are players. I'll say it's not even a completely wrong interpretation. They come up with a valid interpretation that they walked away with, and you had very little control over that. But it's OK, because the game's more about players usually than about the designers. Even though you see the designers' names on many of these boxes, a lot of them would agree with that, I think. In the end, there are relatively few famous game designers out there, but there are many, many famous game players out there. I think there's a good reason for that. 
All the creative-- all the creativity that comes out of constraints, all the, as we're talking about sports, the athleticism. The ability to work within these constraints to do something that people didn't necessarily think was possible is usually the hallmark of a great player. The designers just provide the sandbox for them to be able to express themselves that way. So I think when you come back on Monday, be able to come in with a concept after taking a look at some of the readings. This is what I want to hit, right, and I want to have a game that-- that has a goal. Or I want to have a game where the players have a goal. I want to have a game where the players are going to express a certain amount of creativity. Or whether it's-- they're really just sort of mechanically going through the possibility space. I think we've got a couple of games that actually were very mechanical. And you're just stepping through the-- stepping through the paces. [? Some ?] [? are ?] [? closer, ?] and some of them are almost a little bit like that, right, because you don't get to decide what question you ask, I believe. AUDIENCE: You just draw the question. PROFESSOR: Yeah, you just draw it. You ask the question. You get information, and, at some point of time, you get all the information you need. So-- so that's a game that has relatively little decision making on the player's part, if you are playing optimally. If you're not playing optimally, then actually you're making a whole bunch of incorrect decisions that actually makes it a little bit more variable in who's going to win. And that's why the game's actually interesting. Come in on Monday. Be willing to discuss that. Come in on Monday. Be willing to change your mind when you actually meet up with your team. We're going to do a little bit of brainstorming on Monday on what sort of games you want to work on, and, more importantly, what mechanics you want to work on. 
One more clarification about the first assignment. It is not about the story of your game or the fiction of your game or where your game is set-- in ancient history or in sci-fi or anything like that. We'll get to that later on in this class. I want you to think about what are the rules that a player needs to deal with. Right? You are your own target audience, as far as I know. That's it for class today. Thank you very much. Please make sure that you did sign in-- if you haven't gotten the attendance sheet, be officially on it. [INTERPOSING VOICES]
|
MIT_CMS608_Game_Design_Spring_2014
|
4_Mechanics_Dynamics_Aesthetics_The_MDA_Framework.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. Some content in this audio lecture is not covered under our Creative Commons license. Please listen to the end credits for more information. PROFESSOR: OK. All right. So today's reading was the "MDA" paper by Robin Hunicke, Robert Zubek, and Marc LeBlanc. I hope you found it pretty light reading because it was intended to be pretty light. And the whole point of it was that we will be getting through this pretty quickly today so that you have time to sit with your teams, talk about your project, work on it a little bit. All the prototyping materials that are in front I'm going to be putting the boxes on the tables. Are you more or less seated with your teams right now? More or less? OK. Every team represented-- let's see. You are the building team. You guys are the-- which? AUDIENCE: I'm also builder. PROFESSOR: The different building team. AUDIENCE: The LEGOs, yeah. PROFESSOR: Right. The LEGO building team, OK. Which team is that? AUDIENCE: [INAUDIBLE] PROFESSOR: The what? AUDIENCE: Hidden information. PROFESSOR: The hidden information team. AUDIENCE: Thievery. PROFESSOR: The thievery, stealing stuff? AUDIENCE: The path building. Path building. PROFESSOR: Oh, you're the path building. OK. All right. So, all right, cool. I'll type it here, great. Hopefully we'll be able to make a little bit of progress on your teams on your projects. So who managed to play a little bit more of the games, whether it was the prototype that you played on Wednesday or whether it was something else you developed along the way. Who managed to get some playtime? OK, a few people. Who did you play it with? AUDIENCE: So we just went back to the drawing board and redid new games. 
PROFESSOR: OK. AUDIENCE: Exploring. So we just played them amongst ourselves. PROFESSOR: Among yourselves? OK. Who did you play with? AUDIENCE: Just different people in the dorms. PROFESSOR: OK. Dormmates basically, right? Anyone else? Did anyone else do? Yep. AUDIENCE: I mean, we went through a couple more iterations-- PROFESSOR: Just within your team? AUDIENCE: Yeah. PROFESSOR: Yeah. OK. Did you play an iteration that you didn't develop yourself? Did anyone? It's like, if you played within your team, obviously-- OK. So if you played a version that you hadn't developed yourself, or if you played it with other people who were developing the game with you, how do you-- what were some of the things that you felt about the game? Or what did people say about your game once the game was done? What did people seem to like or not like about it? What were some of the things that they were saying? AUDIENCE: It didn't end. PROFESSOR: It didn't end. OK, all right. What else? It could have been negative feedback. It could have been, wow, this is terrible because, blah. What was the blah? Or it could have been, wow, that was kind of cool because of blah. Nothing? Nobody gave you any feedback? They just kind of sat there, passively, why are you making us play this? [LAUGHTER] OK, well, I was hoping that that was a segue into a little talk about aesthetics, actually. Maybe what I'll do then is, I'm gonna make you play a game. A short game. The game that you will see-- who's played it before? AUDIENCE: The game on the screen? PROFESSOR: Yeah, Oasis. Also note it's Defense of the Oasis, which is easily confused with DOTA, but it is not DOTA. All right. If somebody would like to volunteer to play through an early easy level while there's a lot-- you know what? You all get a chance to play it through one round, because each round's pretty fast. So we'll start with you and then we'll go on then, OK? 
And then, everybody else pay attention because I guess he's pretty much playing the tutorial level. So this was only designed for the PC. It was actually one of the-- came out roughly-- you could just go ahead and play-- it came out roughly about the same time as casual games, downloadable, casual games, picking up on the market. I don't think it was ever sold as a boxed product, or if it was, that really wasn't the point of its business model. AUDIENCE: There were people following me. PROFESSOR: So you click around. AUDIENCE: I can probably figure out. Oh, this is really the point of figuring it out. That one. That happened. [LAUGHTER] AUDIENCE: Let's keep walking. 151. AUDIENCE: Already searched. [MUSIC PLAYING] AUDIENCE: Oh, it happened again Already searched. [MUSIC PLAYING] AUDIENCE: You should do something clever. [MUSIC PLAYING] PROFESSOR: So notice the trends are going down. AUDIENCE: Yeah. [MUSIC PLAYING] AUDIENCE: Water. [MUSIC PLAYING] AUDIENCE: They like music, don't they? [MUSIC PLAYING] AUDIENCE: OK. PROFESSOR: Big shiny thing. AUDIENCE: How do we increase population? [MUSIC PLAYING] AUDIENCE: Yay. [MUSIC PLAYING] AUDIENCE: Throw that towel. PROFESSOR: Look, notice this guy is down there. AUDIENCE: I am watching you. Oh, hey, hello people. [MUSIC PLAYING] PROFESSOR: So this actually turns out not to be the tutorial level. This just turns out to be the first level. AUDIENCE: The way most people play-- why don't you use the tutorial? PROFESSOR: That's true. People don't like playing tutorials, that's true. AUDIENCE: So are we supposed to find these four technologies. The barbarians are here? Why are they-- no, they're not. PROFESSOR: You have 22 turns and about two barbarian hordes are arriving. AUDIENCE: So basically, movements is turn, and I'm just walking. PROFESSOR: Yeah. AUDIENCE: So people-- I should walk to an area I've already discovered? Over here. Oh, I did something clever! I don't know what I did. 
[MUSIC PLAYING] AUDIENCE: Just when you search-- [INAUDIBLE] AUDIENCE: I don't know what to do. I'll just walk here and watch things go bad. PROFESSOR: Barbarians are coming. Tap to the city. Move troops there. AUDIENCE: Move here. Yay, 94 people. Can I move other people? AUDIENCE: Try. PROFESSOR: Try. AUDIENCE: No more troops. No more troops. No more troops. We're all going to die. 333. [LAUGHTER] AUDIENCE: Oh, I thought this was supposed to be me learning how to play. AUDIENCE: Yeah, learning. Feel the learning. [LAUGHTER] AUDIENCE: You have to use your technology. AUDIENCE: Yeah, can you use any? AUDIENCE: I'm tapping on technology. I'm tapping Hannibal. I'm tapping spear. I'm giving it all my love, and it's not doing anything. [CRASHING AND YELLING] AUDIENCE: I think I won. [LAUGHTER] AUDIENCE: Come on, man, you got this. Kill them. PROFESSOR: Whoa. AUDIENCE: What did they do? AUDIENCE: Oh, snap. PROFESSOR: Zot. [MUSIC PLAYING] AUDIENCE: What? PROFESSOR: OK. So you did, in fact, complete the level. But you're probably fairly befuddled about what happened here. How do you feel at the end of that level right now? AUDIENCE: Like I'm good at touching squares, but that doesn't mean much. Like all I knew was if I found cities I'd tap them again and I could search. PROFESSOR: OK. AUDIENCE: That's about all I found. PROFESSOR: OK. That's what you figured out about the rules, but how do you personally feel about this game right now? I mean, granted, you only played one level, and clearly the game goes on. AUDIENCE: I feel like it's somewhat impossible to control the ending situation because I have no way of determining or upgrading my population. PROFESSOR: OK. All right. So it feels like things are out of control right now. AUDIENCE: Yeah. Or like I have no influence, so I'm just one dude exploring the map. PROFESSOR: OK. So lack of influence maybe. Lack of influence, or possibly lack of control, something like that. All right, so next person. 
We're going from right to left. So you're next. You're going to play level two, taking in everything that you've seen so far. AUDIENCE: Oh, crap. AUDIENCE: Like you saw, you can tap on purple areas that are-- AUDIENCE: Purple area to explore-- so I tap that. OK. PROFESSOR: Anyone has-- any ideas about what he should be doing, go ahead and say so. He doesn't have to play this on his own. AUDIENCE: To the question mark-- AUDIENCE: I'm doing this. I want to discover every square. AUDIENCE: Just go to the question marks. AUDIENCE: Now that you know-- double-tap cities. AUDIENCE: Oh, double tap cities? AUDIENCE: Yeah just tap them again, you'll search them. Ta-da! AUDIENCE: Oh, you got-- [INTERPOSING VOICES] Was that good? Axe. AUDIENCE: That's probably good. I don't know what the axe does. AUDIENCE: Go to the question mark! AUDIENCE: He'll get there. He'll get there. AUDIENCE: Should I go straight to the question mark? AUDIENCE: Yes. Try it. No. AUDIENCE: But then I'm gonna [? cut myself off ?] and waste turns. [INTERPOSING VOICES] AUDIENCE: Every movement is one turn no matter how far apart it is. AUDIENCE: Oh. Really? No. [INTERPOSING VOICES] AUDIENCE: --do it in one straight. You have to go one purple square at a time. AUDIENCE: Yeah, but say I wanted to go over all the squares without [INAUDIBLE] place. AUDIENCE: You're gonna waste one turn per square you discover anyways. AUDIENCE: Yeah, it doesn't matter what order. AUDIENCE: Oh. AUDIENCE: You don't use a turn if you're not discovering? AUDIENCE: Oh, look at that-- discover right beneath it. That mountain does something. I don't know what it does but it gives you ten people. PROFESSOR: What do people think that does, where he put 20 people? When you click on the mountain, what do you think happened? AUDIENCE: It's not about 20 people. [INTERPOSING VOICES] AUDIENCE: --technology. PROFESSOR: OK. Go right ahead. Do whatever-- AUDIENCE: You can put in a lot of people. PROFESSOR: I don't know if that's good or bad. 
I waste turns. AUDIENCE: Not that [INAUDIBLE] to me. AUDIENCE: Clearly it's valuable. AUDIENCE: What else do we need people for at this stage? AUDIENCE: Keep running around. AUDIENCE: Whoa. AUDIENCE: Yeah. See, look. Now I'm still one turn. AUDIENCE: Wait, do I lose followers when I do that? [INTERPOSING VOICES] AUDIENCE: We're investigating. AUDIENCE: Yeah, I do. AUDIENCE: Oh! But I got something. I got archery. I like archers. AUDIENCE: Clearly because you have enough people. Bows increase your chances of inflicting damage on the barbarians. Good. Keep doing that stuff. I wonder-- it looks like there's-- AUDIENCE: Whoa, I'm huge. [INTERPOSING VOICES] AUDIENCE: Double tap the city, boys! Hit the city again. Ta-da! AUDIENCE: Oh, look at that! I don't know what that means, but you found another technology. PROFESSOR: Looks like-- I have no idea what that-- oh, helmets! AUDIENCE: Slightly improved defense [INAUDIBLE]. AUDIENCE: Oh, another city. Greetings. If at least two cities survive, I will join your cause. That means this is supposed to be a campaign game and last over a long period of time. AUDIENCE: What's that thing right there? I don't know. PROFESSOR: What thing? AUDIENCE: The tent-looking thing. [INTERPOSING VOICES] AUDIENCE: Yeah, you can't double click it, so you just wasted-- PROFESSOR: Over there? AUDIENCE: two turns. AUDIENCE: Dammit! Dude, good, another city. AUDIENCE: [INAUDIBLE] AUDIENCE: How do you add people? AUDIENCE: I should have searched more. AUDIENCE: It's everyone else's fault. Not yours. AUDIENCE: Oh, look, that thing had people underneath it. AUDIENCE: Oh, you got more technology than I did. Very nice. [INTERPOSING VOICES] AUDIENCE: Put your troops to the city. Put your troops to the city. [INTERPOSING VOICES] AUDIENCE: --half that they're going to take? AUDIENCE: Two [? knights ?] are off. AUDIENCE: Because he's defending the second to last cities-- [INAUDIBLE] That's a lot of Trojans. Thirty are going to die. AUDIENCE: Oh.
AUDIENCE: They survived some fatal blows. AUDIENCE: Helmet for falling rocks. AUDIENCE: All right, here we go. Let's do it! Go, go, go! Aw. What?! AUDIENCE: Boo! AUDIENCE: You shouldn't have put too many people on the lines. AUDIENCE: Oh! Barely survived. AUDIENCE: I won? AUDIENCE: Yeah. PROFESSOR: So how do you feel about that? AUDIENCE: I feel awesome, but I have no idea what happened. PROFESSOR: OK. AUDIENCE: The greater the number of cities that are connected together by roads, the faster they all grow. That dude built roads, and we don't know how. AUDIENCE: Maybe he-- AUDIENCE: Wait, did he actually? AUDIENCE: Yeah, he had roads before. AUDIENCE: Did he? AUDIENCE: Yeah, you saw him light up and connect. All those things were basically roads. You had the dudes moving along it. Its population increased in size. Our population has yet to increase in size. PROFESSOR: I think you [? beat it ?] before class started. AUDIENCE: Yeah. I got a 15, dude. PROFESSOR: OK. So you feel awesome, but you're still not sure what-- AUDIENCE: I still don't know what's going on. I do know that exploring is very good. PROFESSOR: OK. AUDIENCE: I figured that one out. PROFESSOR: So there's some sort of exploring. It's good for something. AUDIENCE: I got zero points. [INTERPOSING VOICES] PROFESSOR: OK. AUDIENCE: And I got zero, too, but mine went from negative above. PROFESSOR: There's still a lot of confusion going on, OK. All right. All right. Next. AUDIENCE: It's happy confusion. [INTERPOSING VOICES] AUDIENCE: How do you build a road? PROFESSOR: I think it's showing you how the points are added. [INTERPOSING VOICES] PROFESSOR: So how was that round? AUDIENCE: It was fun. I wish that I could see-- I think there was a way to see which city they were going to attack first. I didn't know what it was. I forgot. Also, I feel like it's sort of pointless that you have to click on a city again to explore it. I always want to explore it, as far as I can tell. PROFESSOR: OK. All right.
So you feel that there is some redundancy going on? AUDIENCE: Oh yes. PROFESSOR: OK. All right. All right, cool. Thank you very much. So these are kind of the things that, as players, you know, the sensations that you get. Right? You get kind of the joy of exploring, finding out new things, pushing back the fog. And the beginning of learning again, this confusion, this befuddlement. You don't actually know how your actions are changing things. But if you play a lot of games, you are usually sort of primed to accept a little bit of that. And sometimes it's balanced well enough that it sort of intrigues you rather than puts you off. But sometimes it just puts you off. Someone said that it felt good to save people. You described it as feeling kind of fun. In the paper there are some different ways we can describe fun. It's just eight different types, but even the paper states it isn't meant to be a comprehensive list. It isn't meant to be a list that exhausts every single option out there. It's just to show that there are different kinds of fun when you play a game-- like exploring things, or inhabiting a role, or overcoming a challenge, or just passing time in a way to alleviate boredom. Sorry, I have a bit of a cold, so my speech is a little bit stunted today. There are a number of game mechanics in here that generate some of these sensations. I want to put aside the confusion and the lack of influence right now, because some of that's addressed in the tutorial. If you went through the tutorial, it explains to you what all of the actions were. You probably wouldn't feel quite so confused about what you could do. However, I think it's reasonable to address the lack of control that you might feel, even after knowing what you could do. Right? These are the things that people would describe as aesthetics. The thing that the player feels. The paper describes them as desirable experiences. That's not necessarily always the case. It's not always desirable.
I'm pretty sure that confusion was not a desired experience that the designer of this game wanted you to have. By the way, one of the designers of that game is actually one of the writers of this paper, which is why we're playing this game in particular. So these are aesthetics. Let's talk about some of the mechanics of this game. Let me hit the screen and bring it up. So let's talk about some of the mechanics of this game. Someone describe it. AUDIENCE: Turn based? PROFESSOR: OK, so turn based. The game waits for you until you make a decision. It's not real time. Is it not real time? AUDIENCE: It waits. PROFESSOR: Huh? AUDIENCE: It waits. PROFESSOR: It waits? AUDIENCE: Except for when you run into barbarians, right? Don't you have 10 seconds or something to-- Yes. Except for that. [INTERPOSING VOICES] The countdown doesn't start until you send your troops, actually. AUDIENCE: At 10 seconds the line is still moving and you can collect more. AUDIENCE: They do that over the course of the game. AUDIENCE: Yeah, but like the countdown doesn't start until you send your troops. PROFESSOR: OK. All right, so it waits until you make your final move, right? And then the 10 seconds where it counts down and you see things continue to happen. OK, all right. AUDIENCE: There's combat. PROFESSOR: There's combat as a mechanism. There's some sort of AI combat. Right? You don't actually directly tell your troops to attack. You just place them ahead of time, and then they just wail at each other, using some sort of algorithm, some sort of computation on how those numbers, how that attrition works. What else is in the game? AUDIENCE: Exploration. PROFESSOR: Exploration? Go ahead. AUDIENCE: [INAUDIBLE] How you find where the cities are, or sort of how you prepare for the attack. PROFESSOR: So how do you find cities? How do you find cities? AUDIENCE: You just have to walk on top of it. PROFESSOR: OK. So you click on the purple squares.
And then that shows you what is underneath the purple squares. It will defog. It's not fog of war, exactly, but they have a sort of unexplored fog, right? Which you have to make disappear. Is that the only clue about how to find a city? AUDIENCE: Question mark. PROFESSOR: Sometimes you get the question mark hint. Right? And there's something there. AUDIENCE: There's like lush terrain around cities usually. PROFESSOR: Farms, actually. It's hard to tell on the screen, but on the iPad, the farms all mark the cities. So if you step on something green, you realize, wait a minute, there's a city somewhere nearby. It's kind of Minesweeper-esque in that way. What else? AUDIENCE: The oasis? PROFESSOR: What was that? AUDIENCE: The oasis. PROFESSOR: OK. So what about the oasis? AUDIENCE: You're also looking for it. As soon as you hit one of the squares of the oasis, you can more or less figure out where the other ones are. PROFESSOR: OK, because? AUDIENCE: It seems to always be in the same 3 by 4 rectangle. PROFESSOR: So the oasis-- AUDIENCE: You see like a corner of the river, then you know it's flowing this way and then that way, right? PROFESSOR: Yes. Well, the oasis is always a rectangle. So if you find a corner, yeah, you know that if you follow those things then-- [INTERPOSING VOICES] AUDIENCE: You know it's grass on this side, water on this side. So it's always inside. PROFESSOR: Right. So if the grid was like that, the moment you find one corner, then you know which way to explore. So the oasis gives you clues. The tiles of the oasis alone give you clues. It is the same way that the farms give you clues about the cities. AUDIENCE: [INAUDIBLE] One of the mechanics is resource allocation, because we were trying to decide whether or not we should use technology, or to build roads, or outfit the cities with defense, or whatever. PROFESSOR: How do you allocate resources in this game? AUDIENCE: You tap it, I guess. PROFESSOR: You just tap it, and what happens when you tap?
AUDIENCE: Certain actions [INAUDIBLE] PROFESSOR: So you have a limited number of followers, and tapping assigns them-- spends them, basically-- to get them to follow some sort of task. So limited followers (I should have stopped it to [INAUDIBLE]). Tapping assigns-- assigns the followers. OK. Other ones? AUDIENCE: There's the scoring at the end? PROFESSOR: Scoring at the end. AUDIENCE: Which is a summary in some sense. It seems to be different from what actually-- like when you end the level, you get a feeling of, did you do well or did you not do well. And then it didn't always correspond with the score. PROFESSOR: Right. The score is unnecessary. So the score is kind of different from what the game is immediately giving you feedback on. The score is kind of like a completely different kind of feedback. AUDIENCE: I actually disagree that the score was like that. So I was very confused when we won the first level, because everyone died. But we got a score of zero, which kind of makes sense. And when I went, a bunch of people died, but I still passed the level. And that made sense because I had two cities left. And I still got a score of zero because I lost tons of people. Then when we made it so we didn't lose a single city, we got a score of 1,000 or something. We didn't lose a single city and we had a lot of people. I thought the score made sense; maybe the level progression didn't. AUDIENCE: Nathan lost some cities and still had a score of about a thousand or so. [INTERPOSING VOICES] AUDIENCE: You lost-- you didn't get any points, and you didn't discover anything. AUDIENCE: There you go. So the score makes sense. AUDIENCE: Also I was going to say that he said that the score was possibly cumulative over all the phases. And the score for the foreground looked to be equivalent to the [INAUDIBLE] 1,530, if I remember correctly. AUDIENCE: No, but the total score said 3,600. [INTERPOSING VOICES] AUDIENCE: There was another mechanic.
We didn't really get to discover it. But it was like glowing, O-A-S-I-S. PROFESSOR: Right. Clearly leading to something we haven't figured out yet. OK. AUDIENCE: There was a special follower that we got that kind of like stalked us. PROFESSOR: OK. I think they call them experts or advisors or something of that nature. What was the advantage of doing that? Having followers return? [INTERPOSING VOICES] AUDIENCE: Plus 20 for the beginning of the game. PROFESSOR: So she started and you had more. All right? AUDIENCE: Here's a question. Every level that we beat in the game, we got a glyph. We had two of them along the side. It's along the same lines of [INAUDIBLE]. It's unclear what they did. AUDIENCE: It said in one of the tutorials that if you get all 12 glyphs, you win the game. PROFESSOR: OK. So there's some sort of glyphs-equals-win. That's exactly how you win this game. Right? AUDIENCE: There were also these little blue orbs that got transported into the artwork. PROFESSOR: Right. AUDIENCE: And I have no idea what that does. PROFESSOR: So where did those blue orbs appear? Did anyone notice? AUDIENCE: When you were uncovering the tiles, I want to say. AUDIENCE: But when you searched, if you didn't get one of those technologies at the top, you usually got a blue orb that would go up. Otherwise, I do not know what that blue orb does. He got a lot of blue orbs. PROFESSOR: When you pick up the oasis-- when you walk over the oasis, you get blue orbs. So the oasis also gives you blue-- which makes sense. It's like the bluest thing on the screen, and then when you click on it, blue things move. So there's some sort of color coordination thing. AUDIENCE: The barbarians steal your blue. PROFESSOR: I know where this meme is going. AUDIENCE: Oh, I was thinking that the blue-- [INTERPOSING VOICES] PROFESSOR: Oh, yes, yes, blue. AUDIENCE: The one time they got to the oasis, something happened where everyone-- all the blue went to the barbarians.
AUDIENCE: It looks like it was a last line of defense. [INTERPOSING VOICES] Steal your blue. PROFESSOR: So either the barbarians are taking away your blue, or you're using your blue to smite the barbarians. Either way you end up with no barbarians and less blue. Right. AUDIENCE: Do you think if that happened, we got down to zero blue, we would lose the game? AUDIENCE: Yes. AUDIENCE: That was my guess. There had to be some way to lose, but we couldn't figure out how. [INTERPOSING VOICES] PROFESSOR: You were at level four of like 15. AUDIENCE: 13, I think. PROFESSOR: I can tell you one of the things that this particular game designer has told me is that he doesn't believe in game balance. He just believes in making a game start so ridiculously easy and end at an almost impossible level. So somewhere along the way you meet the level that you're good at, and that's a balanced game. AUDIENCE: [INAUDIBLE] PROFESSOR: That's one way of looking at it. So we've identified a bunch of game mechanisms. We didn't go into things like roads and technologies and mines and stuff like that. There's a ton of game mechanics in here. These are the kind of things that a game designer can design, right? These are rules. Things as simple as the oasis is always in a rectangle. There's no reason for that. It could have been a river. But you're right. There is actually a rule in the game. Every time you find an oasis, it's always going to be of a rectangular shape. These are the things that a designer designs, and you get dynamics out of a bunch of these mechanics interacting with each other. So I'm going to give one example of a dynamic. One of the dynamics is that as you reveal information about the stuff that is underneath the fog, you start having more decisions to make. What's your next step going to be? Because you could click on the thing that you just discovered. Like if you find a city, one of the things that people were telling everybody-- search it right now! Right?
But then you can also find a mine, and you're going to send followers into it. You could build a road, and some people I think might have noticed what roads do. Did people find out anything about what roads do? AUDIENCE: They gave you more people, I think. PROFESSOR: Yes. AUDIENCE: I think one of the pop-ups said. PROFESSOR: Your cities start actually growing in number when you connect them with roads. And of course that helps with defense in the long run. So one of the dynamics in this game is that the further that you play in a level, the more kinds of decisions you're going to make. At the beginning of the game, you can make no decisions besides which one of these two squares am I going to start searching. Up or left. I think you always start in one of the corners, so there are only two directions that you can possibly search. What other dynamics are there? These are things that arise out of a combination of these mechanics. AUDIENCE: Would one be that you kind of pre-perform a set of actions, like your exploration, your searching and whatever. And then, after that, you just kind of let it go, and the barbarians come and do their bidding? PROFESSOR: OK. So one of the things is you set up initial conditions for, hopefully, things to work out, for sort of the system's behavior to emerge, the long-term behavior. OK. What else? That's actually a dynamic, in this case. That's something you set up even before all your turns are over. Like you said, people in a mine-- the miners just keep digging each turn. You don't have to babysit them. They'll keep digging. And they'll find stuff without you telling them to do anything. What else? What else fits this kind of description of something that arises still kind of mechanically out of a combination of all of these mechanics? Let's say combat, for instance. When the red forces and your blue forces start, you know, like little ants start facing off against each other. What happens?
AUDIENCE: Like really snowballing. Like really snowballing. PROFESSOR: Right. AUDIENCE: It would start off kind of even or so. And then all of a sudden one force would just take a nosedive. PROFESSOR: Uh-huh? AUDIENCE: At one point, the two forces were even, and then one got up to like 90, and one just started dropping and dropping. And this one just stayed at 90. You can see that too with the really small cities that have like 20 people or something that get like squashed. PROFESSOR: Squashed, right? So it has this interesting curve. On like a successful defense, the barbarians are usually here, and the followers-- you usually have a slightly lower number, because a huge number of barbarians show up all of a sudden-- AUDIENCE: But we have technology. PROFESSOR: Yeah. But often what happens is that you've got a number like that, and it kind of goes like that. Right? It starts off kind of even-- the barbarians and your numbers holding steady-- then all of a sudden it just collapses. This is what a successful defense in that game looks like. AUDIENCE: But you can also flip that curve. Not necessarily flip it, but the followers can also take that kind of curve. PROFESSOR: Oh, yeah. AUDIENCE: Instead of just below the barbarians. PROFESSOR: Oh, yeah. Totally. That would be what an unsuccessful defense looks like. That will look more like that, and then it's like bleh. Right? There's usually more barbarians than there are people. So this is all arising out of the game mechanics. Right? This is not an aesthetic experience all by itself. This is just the numbers-- if you plotted them fraction of a second by fraction of a second, you'd get a graph. This is just the algorithm doing its thing. All right. What does this kind of-- I'm going to go back to a successful defense. So this arises out of the mechanics of the game. Everything that you do in the game-- get followers into a city, get them technologies, mine stuff, search cities.
Make sure there are enough followers by connecting the cities up with roads. The other thing that roads allow you to do is redistribute followers between cities that are connected. That's the thing that you get to do in those last 10 seconds-- you don't just assign everybody that you've got-- all the people, go into this one city. You can actually do that and connect them up with roads. You have 10 seconds to do that. So everything leads up to how many followers you've got in a city that the barbarians are going to attack, right? You know there's this interaction that keeps going on until all the barbarians are destroyed. Then you get this curve. And what part of the aesthetic does this generate? What does the player hopefully experience because of that? AUDIENCE: You cheer for your forces-- PROFESSOR: Yeah. AUDIENCE: And then you feel disappointed when things start going bad, and you kind of feel for them. PROFESSOR: Yeah. And then when they pull out of it, you're like, yeah! At this point it's like we are both kind of in the same trajectory. Sure they're dropping, but we're dropping too. Things aren't going so well, and then all of a sudden your folks come through, right? We were cheering. We were shouting our heads off, as if our cheers had any influence on the outcome. I'm not quite sure how to describe that aesthetic, but that's what the player takes away from this game, right? There's this tiny little moment of the game. It's actually very finely tuned to generate exactly that experience. You're not quite sure what the outcome of this battle is going to be. If you play the game for hours and hours and hours, then you can kind of predict what's going to happen the moment the battle starts, because you already know what each technology does. But at this point in time when you're still learning the game, you don't know what each technology does. You don't know how the battles are going to turn out.
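The "holds steady, then collapses" battle curve described above can be reproduced with a toy attrition model. To be clear, this is a hypothetical sketch, not Oasis's actual combat algorithm: each tick, each side loses troops in proportion to the size of the opposing force, so a small per-troop advantage (standing in here for technologies like bows and helmets) compounds into a late collapse.

```python
# Toy Lanchester-style attrition model -- a guess at the shape of the
# curve, NOT the game's real combat code. follower_power > barbarian_power
# stands in for technology bonuses such as bows and helmets.
def simulate_battle(followers, barbarians,
                    follower_power=1.3, barbarian_power=1.0, dt=0.1):
    """Return the per-tick (followers, barbarians) history until one side hits 0."""
    history = [(followers, barbarians)]
    while followers > 0 and barbarians > 0:
        # Each side's losses scale with the size of the opposing force.
        f_loss = barbarian_power * barbarians * dt
        b_loss = follower_power * followers * dt
        followers = max(0.0, followers - f_loss)
        barbarians = max(0.0, barbarians - b_loss)
        history.append((followers, barbarians))
    return history

# Outnumbered defenders (95 vs 100) who hold a technology edge:
history = simulate_battle(followers=95, barbarians=100)
```

Early ticks shrink both sides almost in lockstep-- the "sure they're dropping, but we're dropping too" phase-- and then the compounding advantage kicks in and the losing side's curve nosedives, while in the actual game all of this plays out in front of you in a few seconds.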
It gets really, really exciting. So maybe there's a kind of-- how Marc LeBlanc, the designer of this game, described it was dramatic tension. I'm not quite sure if he's using that term exactly how it would be used in literary convention. But in this view it does feel tense, because there's an uncertain outcome about who's going to come out ahead, and then when it resolves in your favor, you feel great. That's basically one dissection using the MDA framework that we just went through. There's a whole bunch of mechanics that create a dynamic that creates an aesthetic experience. Which is the thing that you want your players to have. In this particular case, this is explicitly what Marc LeBlanc wanted his players to experience. He wanted them to cheer for the followers, even though that has no impact on the final result of the fight. So when you look at your games, I want you to be asking your play testers what they're feeling about the game. Usually after they're done. Say how do you feel about that, what did you like about the game, what didn't you like about the game. Both are aesthetic experiences that come out from the play of the game. This is much easier to do with people who are actually not in this class, because you're all thinking about the games from the rules level. If you go to somebody who plays games but doesn't really make games or isn't in the process of learning about how to make games, it's a lot easier for them to just think about well, I kind of felt crappy, because I didn't feel there was any way for me to come back in this game, you know. Or, yeah, I felt like I was ROFL stomping everybody. ROFL stomping is an aesthetic. 
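As a note-taking aid, that kind of dissection can be written down as a simple mechanics-to-dynamics-to-aesthetics map. This is a hypothetical sketch; the labels are my paraphrase of the class discussion of Oasis, not terms taken from the paper:

```python
# A hypothetical note-taking structure for an MDA dissection of Oasis.
# Mechanics are designed; dynamics and aesthetics emerge from them.
mda_dissection = {
    "mechanics": [
        "turn-based movement; tapping squares lifts the unexplored fog",
        "limited followers, assigned (spent) by tapping",
        "AI-resolved combat: place troops, then watch the algorithm",
        "oasis is always a rectangle; farms mark nearby cities",
    ],
    "dynamics": [
        "more revealed map -> more kinds of decisions each turn",
        "set up initial conditions, then watch the system run",
        "attrition curve: numbers hold steady, then one side collapses",
    ],
    "aesthetics": [
        "joy of exploration (pushing back the fog)",
        "dramatic tension: cheering for an uncertain battle outcome",
    ],
}
```

Playtester quotes like "I felt crappy" or "I was ROFL stomping everybody" fill in the aesthetics layer; the designer's job is then to trace those feelings back through the dynamics to the mechanics that produced them.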
Those are the things that you, as game designers, can take as clues on whether your game is doing what you want it to do with the mechanics that you're designing; whether the game is not doing what you want it to do, because maybe the mechanics are interacting in a way that creates a dynamic that you didn't want; or maybe the game is not doing what you wanted it to do, but it's actually pretty cool. There's always the accidental discovery of your mechanics doing something that you didn't quite expect. Maybe you expected something to be kind of like cool and strategic and detached, but then your players end up feeling really tense and feeling that everything is on the line and everything could go wrong at any moment, and they like that. OK. All right. That's something you can work with. But you may not necessarily understand that's what your game is doing if you're not asking your players what's the sensation that they're getting out of the game purely from the rules. When you're working with the prototypes in particular, it's very easy for you to make this connection back to your mechanics, because they can't be experiencing terribly much from your artwork or from the sound effects, right? Unless you make your sound effects with your mouth, or something, and then it probably sounds pretty goofy. But they can only really get the aesthetic experience out of your rules. Obviously this is not the only way that you can convey an aesthetic experience. You can do things through art, sound, storytelling, characters, plot, and everything. We'll go into that a little bit more before assignment two, because you're going to be exploring that with assignment two. But there's this one particular kind of aesthetic, which I will refer to, in this class, as system aesthetics: the aesthetics that come out from the system of all of your mechanics working together. And, in this project, this first assignment, you don't have to worry about hitting the design aesthetic.
You don't really have to. That's not the goal of this project. I want you to figure out everything that your game mechanic can do. And by the way, there was a question at the beginning of class: what if we wanted to change the mechanic because you discovered something along the way? That's fine. But whatever you change it to, make sure that you're going deep into that thing. Make sure that you're not just running with what you stumbled onto, and then not developing it any further. Try to really, really explore what you eventually will declare that you're looking at. The reason for that is I want you all to have the experience of going really deep into mechanics and figuring out all the different things that one mechanic can do. So that later on, when you all mix and match and form into assignment two and assignment three teams, you've got like a pool of knowledge and, more importantly, a pool of experience in sort of exploring what a mechanic can do. So you can take on new mechanics, you can share information with each other, and use that later to generate interesting aesthetic experiences. Because in the end, this is what the players are here for. They don't care about the amount of work that you put in here. They only care about whether the game makes them feel the way that they want to feel right now while they're playing your game. So any questions? Nope. OK. Well, we'll probably go a little bit more into things about mechanics later this month, but right now the rest of the class is actually time for you to work as a team. Some of you probably haven't seen each other since Wednesday and may not necessarily have talked to each other. So that's fine. Take time now. If you've got something that you need tested, you have two options. One, you can grab Rick and me. Or two, you can grab other people in the classroom who may not necessarily have seen your game yet. I suggest asking people who haven't played your game before to test.
But tomorrow is actually the formal play test. So what you absolutely have to have by the end of class on Friday, or by the beginning of class tomorrow, is something that can be played as a big class. Today there's no required play test. Tomorrow there is. All right? There's also reading for tomorrow, but it's not that bad. I think the total number is-- let me just make sure of that. Oh no, it's only part of it. Yeah. It's going to be just looking at one thing, which is randomization and the function of dice in games. So, OK. Work in your teams. And all the materials are right here; we'll put them on the table. Content from the following sources is copyright of the respective holders. All rights reserved. Excluded from our Creative Commons license. For more information, see ocw.mit.edu/fairuse. Oasis. PlayFirst. 2005. Video game.
MIT_CMS608_Game_Design_Spring_2014
11_Introduction_Overview_and_Syllabus_for_Game_Design.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: This class is called Game Design. It is not an intro class. How many of you have taken a CMS class before? OK, about half of you. All right, a number of you haven't taken a CMS class before. There's actually a prerequisite, but I can really take as many students as can fit in here. So let me tell you why the prerequisite exists. The prerequisite exists because in a CMS class we typically expect people to do a lot of writing; we expect people to expect their grades to depend on the quality of their writing. We expect people to do all of the reading in the class, and to be able to do it before class starts, so that you can actually be prepared to have a conversation about it in class. Different departments, even different humanities departments, do things differently. If you've taken any CMS class, none of this should sound surprising. You know, you've probably written way more in other CMS classes than you would necessarily write in this class. You're also expected to present. You're expected to come up in front of the entire class and talk about your project. You should expect that your grades will depend on the quality of your presentation as well. So, one example-- you might design a perfectly good game. In fact, an extremely elegant, beautiful game for the ages like Go or something like that, and you design it in this class, and that's great. But if your grammar is terrible in your rules, even though I can still read the rules and I can play the game, and I say, wow, this is a really beautiful game, but your writing sucks, you are going to lose grades based on that. That's one of the things that you should expect.
There is a writing center over in Building 12, I think; they are going to be moving in the summer, but right now they're still in Building 12. Definitely take advantage of them. We are looking for concise writing, we are looking for precise writing, but we're not looking for, how should I put it, any more words than are necessary to get a point across. We want things to be fast to read. We want things to be easy for laypeople to read, not just MIT students. You are going to see many, many examples of very poorly written rules in this class, for games that are actually very well made. I actually kind of like this company here, HABA. How many of you have heard of this company HABA? They make little wooden trains and toys; if you have any kid siblings, they may have gotten toys from this company. They make games for kids, though not for children under three years. That's the barrier. And the rules are written in, like, six different languages, but for kids. They still expect parents to be able to explain how these games go, but you can imagine a six-year-old picking up a game like this and being able to play it just fine. And this is actually a really elegant game, and we'll be taking a little bit closer look at it later today. That's kind of where you should be aiming with the quality of your writing. Try to get your stuff across as clearly as possible, as quickly as possible, at as low a reading level as possible. There are a couple of tips that we have written, and that file is up on Stellar, and I'm handing that sheet out so you can go home and download the PDF yourself; that's called the Rules Style Guide. Do I have a link for that in here? I don't have a link for that in here. I'll just put it on top of the Stellar announcement page. The Rules Style Guide is basically a two-page document that just gives you a bunch of tips on when to use bullet points versus numbered bullets, for instance.
Things that you absolutely should include, like a list of components in your game-- so that when I get it as an instructor at the end of the assignment, I can actually make sure that everything in your game was provided to me, so that I can actually play the whole thing. Give players a way to decide who goes first; don't just leave it up to the players, pick a player to go first. Pick something silly like, the person who grew up closest to the sea. That's a real rule from [INAUDIBLE], by the way. So, we do encourage inclusive language. It can be a little bit more verbose, but instead of describing all players as male, we do appreciate it when people say he or she. It's fine to flip between he and she, but, of course, not when you're describing the same player carrying out a certain rule. They're not changing their gender in the middle of the turn. But a great way to use language to describe players is player A, player B. You can also just say the player, the player, the player. It is a little bit more verbose, but in that particular case there is a sort of higher goal in mind, and that's to get across the point that games are for everyone. Especially the games that you're going to be making in this class. There's no particular reason why, in this class, you should be making a game just for men to play, or just for women to play. And right now it is too easy to fall into the trap of generalizing that all players are male, and that's just bad. So don't do that. In fact, if you take the effort to use inclusive language, you will see that reflected in your grade. Let's see, what else. Attendance: you have to attend every single class. If you are sick, and especially sick with something contagious, or sick to the point where you can't move, say you broke something and you really should be resting, it's fine; email us that before class starts.
Now, you know, if there was an accident or something and you had trouble getting to email, that's fine, you can email us after. It would be nice to get some evidence from your doctor, but if push comes to shove I'm not going to be too much of a stickler about that. Things that I don't give automatic absences for include being hauled away for a job interview, or being stuck in an airport because you planned your spring break flight back too late or something like that. You have three free absences, basically. You have three absences where you don't have to provide any excuse-- you don't have to tell me why you were gone. And that's what I expect. You can miss about three classes and I'm still quite satisfied that you've gotten everything that you need from this class. Illnesses are one of the few things that are really out of your control, and can hit people for an extended period of time, and that wasn't necessarily your fault. Whereas something like being stuck in an airport usually only costs you one class, so it's not that bad. How many of you are seniors on the job interview track? Save those three for things like job interviews, because you're going to need them. And you don't have to ask for permission or anything-- just go to your job interviews and realize that you have three of those. If you do miss more than three classes, and it was not due to illness, then you may well get a reduction in your grades. I have had students drop by a full letter grade. So they would have gotten a B; they got a C based on absences alone. All of their assignments told me that they were going to be a B average, but their attendance was poor, and they dropped as a result of that. Usually missing four classes doesn't drop you a full letter grade, but it really depends on how close you were to the cutoff, like between a B-minus and a C-plus, before taking your attendance into account. There's a little bit more detail in the syllabus about how grades are computed.
Plagiarism is usually not so much of a problem when it comes to the group projects, because you're going to be writing a lot of your rules together, and everyone is keeping an eye on what's actually being handed in for your assignments. Keep in mind that I certainly don't want you to be copying wholesale from somebody else's rules. That's more than plagiarism; that might be copyright infringement. But when it comes to your own writing, there is actually a one-page personal report every time you hand in a team assignment. We are expecting you to keep a journal of every team meeting. A personal journal about how the team meeting went, what decisions you made; you can write down things like, wow, only half the team turned up and that meeting wasn't really useful. These are all useful things to keep in mind. You get to choose what you want to write down in your one-page report. Again, I'm looking for something that's well written; reflective would be very nice. Based on the experience of doing this team assignment, what will I do differently next time? Or what went well that I want to make sure that I do again? Those things are very good to put down in your one-page report. Write those on your own; don't go into it like a group writing session. You can compare notes, that's fine. But when it comes down to writing it, I want you to write it on your own. How many grad students in here? One? OK, so grad students, if you're signing up for 608, there's no additional work. If you're signing up for 864, which gives you grad credit, then there is additional work. That largely involves checking out the games one week in advance, making sure that they're all punched, making sure all the pieces are in there, and making sure you know the rules. You don't necessarily have to learn the rules by reading the rules; you can do it by watching play videos on YouTube.
However it takes to learn the game, you bring that knowledge to class and you teach your classmates how to play, so everyone can get started a little bit faster. And make sure that all of the pieces are inside. That being said, everyone should be looking at the rules of these games when they're in class. You will see in the class schedule that I actually have all the games that we're going to be playing listed, so that if the name of a game escapes you, you'll be able to Google it and look it up later. And you should be looking at the rules because you want to see how the rules are written-- how good rules are written, how bad rules are written. And you're going to, again, see a lot of really bad rules. And you want to understand how certain things are conveyed. Maybe there's a really elegant game mechanic, and you want to try to use a version of that in your own game, but you are having difficulty explaining it. Well, look at how other board games' rules do it, and see whether they do a better job of it. Look at videos-- I'm one of the few people who definitely learn games better by watching a video of somebody who's actually played it, or talking to somebody who's actually played it and having them show it to me. I have trouble parsing rules myself, but that's why I can really appreciate well-written rules when I see them. So make sure that you are looking at the rules. As I briefly mentioned earlier, all the assignments in this class are team projects. There is, again, that one-page individual report that you hand in, usually the week after the assignment, and that one you write yourself, but it's all based on the team project. Which means you're going to be working in teams, typically of three to four people. I do not recommend teams of five people in this class, simply because once you hit five you get into certain communication overheads that make it kind of counterproductive, so you can probably get about the same amount of work done with four people.
Sometimes, if all five people are living in the same living group, you might be able to make it work. It's still really hard to schedule a meeting with five people all at once. Class does end at 4 o'clock. Usually that last hour is game playing time, which means you might be able to take that time to meet up with your team. It's even better if the hour between 4 and 5 o'clock is free for every single member of your team, but that's not going to be the case for every single team or every single project. And you can switch teams between projects. Assignment one can be with a completely different group of people from assignment two; that's fine. We're going to do team formation-- I think the schedule has it on Monday-- because I'm not sure if this is the entire class yet. So on Monday we'll actually go a little bit more into brainstorming, talking a little bit about what projects you want to do, assembling teams around that, and then you're off to the races. You've only got three weeks to finish this first game. Is it three weeks? Yep, exactly three weeks; actually less than that, because you start on the Monday and then hand in on that third Wednesday. So a very short amount of time, but that's OK, because this is supposed to be a very quick and dirty first assignment: get it out of the way and get into the groove of things. We're going to spend most of this month just talking about the discipline of making games. What is iterative design, what is rapid prototyping, what is game testing, why is it important, and why you're going to be doing this over, and over, and over, and over, and over again all semester long. That's the one thing that I want you to walk away from this class with: being better at iterative design. Who here is familiar with-- has practiced some sort of iterative design from any discipline? How many of you are from an engineering background where you practiced that?
OK, just to get a sense of what sort of project work you came from-- AUDIENCE: Mech E. PROFESSOR: Mech E? AUDIENCE: I was in Creating Video Games last year. PROFESSOR: Yeah, Creating Video Games. AUDIENCE: Building a robot for the 6.270 competition. PROFESSOR: OK, yeah. Others? AUDIENCE: Software development. PROFESSOR: Software development. AUDIENCE: Creating Video Games last semester. PROFESSOR: Right, yeah. I'll just point out the computer game design classes; very relevant experience here. This class is, of course, a good prerequisite for getting into CMS.611, which is our Creating Video Games class. That class is about teamwork, and it's a lot more about software, obviously, because it's a video games class. That's going to be in the fall, so if you're interested after this class is over, you might want to check that one out, and you can talk to the students around here who've done it before. By the way, for those people who've actually been in classes with me: I haven't got my contacts in, I'm just blind right now. My glasses snapped in half, and I have a three-year-old; things are happening. Actually, I was playing StarCraft and then [INAUDIBLE] The final thing that I want to talk about, all the way at the end of the syllabus, is this thing called a change log. Those of you who have done CMS.611 recently will find this quite familiar. This is, again, one per group, not one per person, and you hand this in at the same time as you hand in each assignment. What I want you to do is that every time you have a design meeting, whether you meet in class, or outside of class on a weekend, or even online or something, every time you make changes to your game, write down why you made those changes, written down in this format. We give you the sample format, and we expect you to stick to it. You can tweak it, but we expect to see at least this much detail. What do the actions in your game do, what are the goals of the player, or players of course.
What are the problems that your team is noticing after having played through this game? Turns take too long, no one knows what to do, the game takes too long, the game is over too quickly, the game seems too random and out of our control. All those things, write down under problems. This should be as useful for you as it is for us. If somebody misses a design meeting, you should be able to keep this in a Google Doc or some shared document, and they can look at it: oh, that's what happened last week. Oh, you changed the win condition to half what it was. OK, I guess that's what we're working with now; why did you do that? And it says, oh, because the game was taking too long or something. I expect to see that coming with each assignment as well. Any questions so far? There are going to be a bunch of guest lectures, but we haven't labeled them in here because we haven't quite scheduled them just yet. The readings are the same, the games are the same. Some changes might happen to the schedule on Stellar. I will obviously make announcements on Stellar, I'll make announcements in class, and I'll try to update the document on Stellar with schedule changes. But, for the most part, assignments are not going to change; just some readings may get rearranged. I have to point out something about reading. So, here are the books that we're referencing in class. There are a few individual essays, but you're not going to have to read every single one of these back to back. Tabletop, this one, is actually available as a free PDF from the publisher, and the link for that is on Stellar. So you don't have to buy the book; you just download the PDF, scroll down to the reading, and just read that. We have one reading from this book, but I actually recommend the whole book. How many of you have heard of the New Games Movement? AUDIENCE: New Games? PROFESSOR: New Games. It's from the '60s, so new is relative.
The New Games Movement was something that-- OK, how many of you played with a parachute in gym class when you were in elementary school? OK, you were part of the New Games Movement. This was basically an activist group of game designers in the '60s, heavily influenced by the Whole Earth Catalog, if you've heard of that-- Stewart Brand, yeah. Basically typical counterculture, but on the games side of things it was very specifically about making games that feature different kinds of cooperative play. If they were competitive, they were lighthearted competitive, and were still trying to develop a sense of community. The parachute-type games are very much about taking military hardware and turning it into peacetime fun activities. The parachute is something that used to drop soldiers, right? It's a really neat book. You can see the title, A Playful Path to Wholeness. It could be some sort of New Age writing, but it's actually a pretty nice treatise on a particular kind of game design, which is: how do you make games that bring people together? Which is a perfectly fine goal if you want to be a game designer. So, check that out. The Design of Everyday Things: you should not graduate from MIT without having read this book. If you design anything-- mechanical engineering, software, anything that's meant to be used by a human being. Maybe those of you who will work on robots might be able to skip this, but I would suggest not; I think you should read this too, and you'll get a lot about usability, accessibility, how to make good user interfaces. We have two readings from here; these are PDFs that are up on our Stellar site. Let me leave these two follow-ups for last. The Oxford History of Board Games-- we actually have a lot of readings from here because it's relevant to our final assignment. And what it is, is a surprisingly readable survey of traditional board games, of which maybe you've heard about 10%.
It's a really interesting analysis, and I prescribe this book for two reasons. One, you need this background knowledge to be able to execute assignment three. Second, it's a good example of what game historical analysis looks like. So, if you're ever interested in going into academia and doing this sort of writing, you at least now have a sample of what that looks like. It's not as dry and stodgy as you think; it is about game play, after all. But we do have a lot of readings; this book is available on reserve in the library. We're going to try to get more scans of it up on Stellar. I'm not entirely sure whether I'm legally allowed to put that many scans up on Stellar. The hard copy itself is there. I don't think you can even buy this book anymore. It has been out of print for a long time, so I'm going to try to increase access to it, but the best I can do right now is that it's on reserve in Hayden Library, I believe. Finally, these three books you might be familiar with if you've ever taken any of the game lab classes-- Game Design Workshop, Challenges for Game Designers, and Rules of Play. The reading for today is actually chapter one from Challenges for Game Designers. All of these books are available online through the MIT Libraries web portal. The trick is you need to be on an MIT subnet. How many of you live on campus? OK, you're not going to have a problem. Just go to our Stellar site; there is this thing called the Books 24/7 link on Stellar. You click on that first before you read any book; that basically does some authentication to make sure that you're actually an MIT student, and if you have a browser with cookies it should work just fine. And then you click the book that you want to read, you'll get a table of contents, you jump to the chapter, and you're reading the book in HTML. If you're not on the MIT campus, I think that if you have a browser with cookies you're still fine.
But you might have better luck by actually being on an MIT subnet. You do exactly the same steps that I said: you go to Stellar, you click on the Books 24/7 link, it authenticates you, and then you can read the book-- you can even print it out. But if you do have trouble reading it for any reason, please let me know, because we're going to have a lot of reading from these three books over the entire semester, so let's try to fix that for you as quickly as possible, and then you won't have any trouble with it. It's free, but it's a bit of a hassle. That's basically it for the formalities. Again, any questions about how the class is organized, how grades are assigned? It is a project-based class, which means your grades are going to be heavily dependent on the performance of your teammates, so get along with your teammates. Try to do good work, but your grade is not going to be determined by how fun your game is. This may come as a surprise to some of you. Fun is one of those things that is really elusive; even if you know what you're trying to design for, you might not get there. Especially within the constraints of doing three projects in one semester. As someone who has designed a lot of games, both at MIT and outside, I fully acknowledge that you might be perfectly disciplined and do everything that I tell you and still end up with a very unfun game. So I'm not going to grade you on that quality. I am going to grade you on your ability to listen to feedback-- feedback that I provided, feedback that you're getting from the testers-- your ability to do things like regular testing sessions, and your ability to stick to the discipline of iterative design. If your game is mostly not playable until the last week, and then suddenly you pull some sort of marathon effort to try to get everything working together in that last week, you should not expect a good grade. That's not what I'm trying to teach here.
You should be constantly iterating on these games every single week. And if you do that, you can count on a fairly reasonable, fairly good grade. This is a B-average class, though. So, typically, when I grade, that's how it ends up. I'm not actually grading on a curve; it is possible to have an A-average class if everybody does incredibly well, but typically, in the past couple of years, it's always been B-average. Let me start with the easy question. What are people playing nowadays? What are people in this class playing, maybe not right now at this instant, but-- AUDIENCE: You mean like computer games? Or any game? PROFESSOR: Any games: board games, card games, computer games. AUDIENCE: League of Legends. PROFESSOR: League of Legends. AUDIENCE: I got a lot of people on [INAUDIBLE] AUDIENCE: Ascension: Chronicle of the Godslayer. PROFESSOR: Ascension? Were you going to say Ascension? AUDIENCE: I was going to say Dominion. PROFESSOR: OK. AUDIENCE: You should learn Ascension. Settlers of Catan. Yeah. AUDIENCE: Settlers. AUDIENCE: [INTERPOSING VOICES] So when you-- PROFESSOR: Dominion. Adventure Quest? AUDIENCE: Wow. PROFESSOR: Wait, wait, hold on. What's Adventure Quest? I haven't heard of this one. AUDIENCE: It's like this online game. Yeah. Always-- [INTERPOSING VOICES] PROFESSOR: It's like-- AUDIENCE: I guess like, it's a lot of-- [INTERPOSING VOICES] PROFESSOR: So it's an adventure game. [INAUDIBLE] Cool. OK. So we've got Adventure Quest. That one I just shortened to Trail. AUDIENCE: Smash Brothers. PROFESSOR: Smash... AUDIENCE: Liars Poker. PROFESSOR: Wait, hold on, which version of Smash? AUDIENCE: I like Melee. AUDIENCE: Everyone in my house wants to play Brawl. PROFESSOR: So Melee is GameCube, right? AUDIENCE: Yeah. That was the first one I ever played. Liars Poker. PROFESSOR: What? Liars Poker? All right. AUDIENCE: Normal poker. PROFESSOR: Wait, hold on. What's normal poker? AUDIENCE: I guess Texas hold'em. PROFESSOR: Texas hold'em poker. OK, all right.
AUDIENCE: Innovation? PROFESSOR: The card game Innovation? Actually made locally. AUDIENCE: Alien Frontiers. PROFESSOR: Yeah. AUDIENCE: Alien Frontiers. PROFESSOR: Alien Frontiers. I'm not familiar with this one. This is-- AUDIENCE: It's an out-of-print board game, Euro style. I was going to say Empire Builder, but you already have that as one of the games that you own. PROFESSOR: I'm not saying what games I have. [INTERPOSING VOICES] AUDIENCE: I know, but I want that game, so I'm a little jealous. PROFESSOR: Well, I'll let you try it out. AUDIENCE: Zombie Apocalypse. PROFESSOR: Zombie Apocalypse. AUDIENCE: Oh yeah. Munchkin? RoboRally. One day I'll be able to add Nymph to this list. PROFESSOR: What's Nymph? AUDIENCE: The board game that came out on Kickstarter [INAUDIBLE]. It's going to be delivered sometime this month. That's one of the most exciting. PROFESSOR: I missed something after Zombie Apocalypse. AUDIENCE: Munchkin. RoboRally. PROFESSOR: RoboRally, wow. How did people get exposed to RoboRally, just out of curiosity? AUDIENCE: I saw it at an internship. They just had a bunch of board games on the wall. PROFESSOR: OK. AUDIENCE: My brother played it somewhere, and then I bought it for my [INAUDIBLE]. PROFESSOR: Very MIT-ish game; it's a game about programming. [LAUGHTER] PROFESSOR: Programming and robots beating each other up. AUDIENCE: Cards Against Humanity. AUDIENCE: Hex Hex. PROFESSOR: I know Hex, but what type of Hex? AUDIENCE: Hex Hex is a game where someone casts a hex, and then everyone has a certain number of cards that just tell you what to do with the hex, like pass it left, or pass it right. And then whoever has the hex in front of them has to deal with the hex with one of their cards. If they can't, then they're hexed, and then they get the chance to play cards that say, play one hex. PROFESSOR: I hope there's a hexagon involved in there. AUDIENCE: Yes. PROFESSOR: OK, cool. [LAUGHTER] AUDIENCE: We Didn't Playtest This. PROFESSOR: That's also local.
The same people that did Innovation made We Didn't Playtest This. There's a game called We Didn't Playtest This At All, and We Didn't Playtest This Either, right? AUDIENCE: I want to try those. Air Baron. PROFESSOR: What? AUDIENCE: Air Baron. PROFESSOR: Care Bear? [LAUGHTER] AUDIENCE: A-I-R, air, as in the sky; baron, as in a noble. PROFESSOR: Oh, Air Baron. AUDIENCE: Yes. PROFESSOR: We should make a game called Air Bear. [LAUGHTER] PROFESSOR: This is digital? AUDIENCE: No, Avalon Hill, 1997. PROFESSOR: Oh, OK. AUDIENCE: Yeah. Resistance? Yeah. Resistance. PROFESSOR: This is like a Mafia-like game, right? AUDIENCE: Yeah, love that game. Coup? PROFESSOR: Clue? AUDIENCE: Coup. C-O-U-P. PROFESSOR: C-O-- AUDIENCE: U-P. PROFESSOR: Oh yeah. Lots of love for this one. AUDIENCE: It's the same people who made Resistance. It's kind of like Mafia, except instead of having night and day phases, what they have is people take turns. They have two different roles, and your role lets you take actions. So what you do is you lie about what role you have in order to take actions that you wouldn't have access to. Overall the mechanics are, you make money, and then you pay money to kill other people. When you kill someone, they lose one of their roles. The way you lose roles is by being killed, or by lying and being caught, or by challenging someone and being wrong. Like Mascarade. PROFESSOR: It's a little like Bluff. Say what? AUDIENCE: It's like Mascarade. PROFESSOR: OK, yeah. Actually, we could add Mascarade here as well. So far I've asked a couple of people to describe the games that have come up. And you can see it's not really all that difficult to say, this is what's interesting about the game; it's kind of like this game but with these changes. That's a normal way of communicating. You're going to have to do that multiple times this semester for your own games.
But often what happens is that when students come up and talk about their game, they just start from step one-- well, this game is a card game that has 52 cards and is played with four players-- and it's like, can you cut to the chase a little bit faster? The way you just described it-- you know, it's kind of like Mafia, but you make these changes-- that works. We will want those details, especially when it's written, or when we're actually sitting down to play. But when you're coming up here and talking about your own game, give us those elevator pitches like you just did. If you think of this game as a little bit like chess, but with different pieces or something like that, start there, and then you can go into greater detail about what your game is. When you give a presentation, though, usually what I'm listening for isn't so much what your game is right now, because I can actually read that. It's: what was your game before it became the thing that it is now? Because I don't necessarily always see that. I'll see various play tests, but I may not necessarily get to every single thing. I want to know how you made the decisions that you did in order to end up with what you finally submitted. And I'll get all the details that I need from the final submission when I actually read it. So when you give your presentations, talk about the history of your game, the three-week history of your game, or the four-week history of the game. So, these are the games that people have been playing. A lot of board games, a couple of digital games, although I think I'm seeing-- how many of you are playing these games digitally? Any of these board games, are you playing them digitally? AUDIENCE: Physically and digitally? PROFESSOR: Yeah. On the phone, yeah? AUDIENCE: I think we [INAUDIBLE] Dominion web application. PROFESSOR: Web app? [INTERPOSING VOICES] PROFESSOR: Your hand was up, yes? AUDIENCE: Ascension, I think it's on the iPhone now.
PROFESSOR: Yeah, a lot of these games actually do have iPhone and iOS ports, Android ports. AUDIENCE: Settlers. Just not fun at all. PROFESSOR: Oh, because you don't-- yeah. Why? AUDIENCE: Because it's more fun to yell at people. PROFESSOR: Because it's more fun to yell at people? It's more fun to guilt people into giving you trades that are beneficial to you. We often refer to that as over-the-table talk. Or the over-the-table interaction. That's something to consider when you're designing a board game or a card game, or even a live action game-- I want to say that you're allowed to design live action games in this class. Games where you're not actually sitting down, but you're actually moving around. Mafia kind of sits in a gray area. AUDIENCE: I think Risk is the perfect example of that. Because video Risk is an entirely different game than board Risk, because you can't cheat. Cheating is an integral part of Risk. PROFESSOR: Describe the tactics of cheating in-- AUDIENCE: You hide cards in your shoe. You move people's armies to places that they weren't on before. PROFESSOR: That one I've seen. AUDIENCE: You add people to your places. PROFESSOR: But not like re-rolls, right? Because those are hard to get away with. AUDIENCE: Monopoly is the worst game ever. PROFESSOR: Because of the cheating, or because of the way the game is made? AUDIENCE: Because cheating. And because it takes like 12 hours to play. [INTERPOSING VOICES] AUDIENCE: --you get the best game, right? Play Riskopoly. PROFESSOR: So you send [INAUDIBLE] into combat with [INAUDIBLE] or something like that. [INTERPOSING VOICES] PROFESSOR: I think that's it. Well, OK. So let's briefly talk about Monopoly then. How many people here like playing Monopoly? OK. AUDIENCE: [INAUDIBLE]. PROFESSOR: How many play Monopoly regularly, like with families? AUDIENCE: [INAUDIBLE]. PROFESSOR: What are the good things about Monopoly? What do people like about Monopoly? AUDIENCE: Making bank. PROFESSOR: OK. All right.
There's this real sense of progression there, right? AUDIENCE: I like putting four houses on one property. PROFESSOR: That's kind of like the real estate version of making bank. Sure. AUDIENCE: When you have a long row of properties all in a row and somebody's heading into them. And you're like, OK, you're not going to get through here without getting on your knees. PROFESSOR: Those little plastic bits mean everything when-- OK. AUDIENCE: It's family friendly. So people like it. The reason people like it is that a little kid can just win. They can do really well and win based off the money. PROFESSOR: There's kind of like a minimum age where they get the basic strategy, and then past that point, it's kind of like anyone's game. AUDIENCE: So, an anecdote. I played with my family once. The first and last time I'll ever play with them. My sister-- she's a year younger than me, and for some reason always has it out for me. And so she was about to go bankrupt, and my dad made a deal with her, basically: any time you make a profit, you give me two thirds of it and I'll make sure you never go out. I was like, now I'm playing against two people. I don't know what I'm supposed to do anymore. PROFESSOR: That situation has been described as king-making, where a player who isn't going to win decides who's going to win. And it can be sucky if they're not the person who-- AUDIENCE: Basically. AUDIENCE: I actually really like that-- kind of what he said, but in a different light. It seems like anyone can win, but the more you know about the game, the more you actually start to understand that not just anyone can win. And in any game where you're rolling a thousand times, there's very little luck involved. PROFESSOR: Because it all averages out. AUDIENCE: So yeah, it does allow you to implement a lot of strategy, even though just about anyone can play it and understand the rules [INAUDIBLE]. PROFESSOR: How many of you play Monopoly with sort of over-the-table bargaining?
AUDIENCE: All the time. PROFESSOR: All the time? AUDIENCE: Yep. PROFESSOR: Who doesn't? Like only the rules? Only the things that-- AUDIENCE: I do only the rules. PROFESSOR: You do only the rules. But do the other people around the table also do only the rules, or will they play-- AUDIENCE: [INTERPOSING VOICES]. PROFESSOR: Things like, I was going to give you money, things like that. AUDIENCE: They always make me offers that I don't want. I'm just like, skip it. PROFESSOR: There is something that a lot of people do in Monopoly, and that's put money in Free Parking. AUDIENCE: Yep. Have to. PROFESSOR: Never do that. Never do that. The game gets four times longer. Never do that, because the game is designed to-- the rules of the game, at least, were designed to suck money out of the system. There are inflows, things like the Collect $200, things like certain Community Chest and Chance cards. But the game is also designed to take money out so that people go bankrupt. With that house rule, you're saying, I'm now losing this money, but I'm now putting it on this square. And if anybody happens to land on Free Parking, they get all this money. That money hasn't actually been sucked out of the system. It's just waiting to be redistributed to somebody else. And that means people take a very long time to go bankrupt. And if everyone's complaining about how long the game takes, you might want to check if you're actually playing with that rule. And if you are, and you read the rules, it's not there. It's not there at all. It's the sort of thing, though, that does make it a lot easier for somebody who is about to go bankrupt to get back into the game. So now we're back to this whole family-friendly thing, which is-- if you are not that great at playing this game, or maybe not even taking it that seriously, you still stand a chance. There's a couple of other things about Monopoly. It's a decent game if you're not at the table all the time.
Say, at Thanksgiving or something, when somebody is watching the game while making sure that the turkey doesn't catch on fire, and needs to stand up and check, and then come back to the game. Everything that you need to know about the game is right there in front of you at the moment when you need to make your decision. There's very little-- I don't think there's any hidden information, except for the amount of money that people have. And that's usually fairly easy to just see. Some people get very secretive, but most people don't. Monopoly is an incredibly successful game. I don't think this is a contentious issue. It makes a lot of money. A lot of copies get sold and bought. And there was some ambivalence here about whether it was a good game or not. Why does that game do so well commercially? AUDIENCE: Because it's easy and well-known. PROFESSOR: Is it easy? AUDIENCE: Sorry, easy to pick up. Not necessarily easy to win. You can take 10, 15 minutes to explain it, and then you play a game. PROFESSOR: 10, 15 minutes to explain if you already know the rules. AUDIENCE: If one person knows. Just explaining to another person. PROFESSOR: But chances are somebody does know the rules. Somebody in that room probably has played Monopoly. AUDIENCE: Because it's so well-known. PROFESSOR: Because it's so well-known. [INTERPOSING VOICES] AUDIENCE: It's sort of like the classic-- it's an old game that existed before many other games. Also, it's easy to play because you don't make decisions, really. PROFESSOR: Yeah. AUDIENCE: It's easy to play without making a decision. Like, roll the dice and move. Roll the dice and move. PROFESSOR: If you can afford to buy it, buy it. That's the decision-making. It's not a decision. It's not strategy. AUDIENCE: I hate it when people make long decisions, like, do I buy it, for like 15 minutes. PROFESSOR: And it's like, no. If you can afford it, buy it. All right. So there's a lot to do in the game, though.
You roll the dice. You pull cards out. You move your little dog. You can make bargains with people over the table. You pick your little houses and you put them down and you turn them into hotels. So there's a lot of busy work that you get to do without having to make a single decision. So you feel very active while you're doing it. Any other ideas about why it might be popular? We already talked about how well-known it is. AUDIENCE: They sell different versions of the same game. If you want one that's Star Wars-themed, [INAUDIBLE]. PROFESSOR: Trivially reskinnable. I have Monopoly Singapore somewhere. That's where I'm from. They just renamed all the properties. All the amounts of money are exactly the same as your standard Monopoly. I think there are some more clever versions of Monopoly out there where you add in a few new rules. But you're right, you can easily sell somebody who has a Monopoly a different copy of Monopoly. But is that person buying a new copy of Monopoly? If you already have a copy of Monopoly, are you buying yourself a new copy of Monopoly? AUDIENCE: I guess you're buying yourself a new skin of Monopoly. PROFESSOR: Are you? Does anyone actually pay money for that? AUDIENCE: No. PROFESSOR: I think you get it. AUDIENCE: They're good gifts. PROFESSOR: They're a good gift. That's the dirty little secret about all these Parker Brothers, Hasbro games. Maybe it's not dirty. It's the reality. The majority of board games, these super, mega, ultra perennial favorites, are not bought by the people who are playing them. They are bought by somebody else or given as a gift. Which means you can expect that they usually sell most of the copies at Christmas. And people might get them for their birthdays. And that's the difference between a game that shows up in Target and Wal-Mart and a game that shows up in a specialty board game store.
You walk into a specialty board game store, which is probably under threat by companies like Amazon, and you're buying a game for yourself. You're buying special dice because you're a role-playing game player, or a miniatures player. And right next to that are the board games. And you say, hey, this will be fun for my next gathering, either at some friend's house or at my own place, or something like that. I want to play this game. You buy a Monopoly and it's usually not for your own collection. It's usually for somebody else. So you give Monopoly to somebody else who already has it. And it's fine, because you just gave them the Star Wars version and they didn't have that. That's how they're successful. I guess what I'm trying to say is it doesn't necessarily have anything to do with how well the game is designed. There are a couple of nice things that Monopoly does. We talked about how it's relatively easy to learn. The fact that it's been around makes it very easy to teach. It's very friendly to people who are playing on some sort of interrupted schedule. But that's not the reason that it's a successful game. AUDIENCE: So I think a lot of it comes from the snowball effect. Everyone knows about Monopoly. If we assume that's true, how do you then make a commercially successful board game? PROFESSOR: How much marketing money do you have? I mean, we've got Blokus here. That's probably one of the more successful recent entries. And there's a couple of ways to be able to get a large number of families to buy a board game. If you're living in Germany, there is a big conference called the Essen Spiel, I believe. I might be mangling the language here, in the town of Essen. There are smaller conferences in places like Leipzig. But what they are-- in addition to the vendors bringing all of their old and new board games, saying, this is what we have on sale.
We have tons and tons and tons of families going there trying to decide what they're buying for Christmas. And in Germany, that is kind of like an annual tradition, for families to buy a Christmas game. Built on top of that, there is this thing-- I'm going to mangle all the German in here. AUDIENCE: BoardGameGeek. PROFESSOR: The Spiel des Jahres. AUDIENCE: Spiel des Jahres. PROFESSOR: Oh. Yeah, OK. Basically, a game of the year award that is awarded, I believe, at Essen; they definitely do a big thing. I'm not exactly sure how the timing lines up. But definitely, the game that happens to be awarded game of the year is going to be the biggest mover at a convention like-- convention is not the right word I am looking for. It's more like a trade fair. That's probably a good example. And just lots and lots of families will go there. They will buy a lot of the game of the year because a bunch of board game critics said, this is the best new game that came out this past year. And then all that money gets circulated back into marketing. You see more ads. You see the award stickers on the side of the boxes. More families buy it. And that just snowballs on top of each other, right? Are there similar things in the US? I believe there are-- AUDIENCE: BoardGameGeek. AUDIENCE: ABC News. PROFESSOR: BoardGameGeek. AUDIENCE: Yeah, they have their own con in Texas. PROFESSOR: Yeah. But not that many families. I mean, I'm thinking more like ABC News doing a top 10 hot toys for this Christmas kind of thing. That's the sort of thing that bumps you up from a game that you buy for yourself to a game you start buying for other people, which multiplies how many copies actually get sold. So that's one way. That's usually one way that it happens. That's not to say that a game that-- what's called an enthusiast board game, a game that's basically sold to people who are buying it for themselves, can't be commercially successful.
It just means that you need to make enough money that you made a profit on it. And a lot of these games-- I think of all the games that I've got today, let's see. This is probably-- let me see if this won any awards, actually. This is probably the sort of game that you would buy for families. This is Set. And of course, it's missing its original box. But you have played Set, right? OK. So this probably fits into the category of a game that somebody in the room has probably played once and can teach everybody else how to play. Who's played Blokus before today? Yep. OK. How many people played with your family? Before you came to-- how many of you played before you came to MIT? OK. High school? AUDIENCE: Middle school. PROFESSOR: Middle school? Yeah. So I think the reason for it-- this is Mattel. So they certainly know their marketing. But I think the reason why this got popular was, again, being one of those top 10 toys to buy that Christmas when it was released. And that's why Blokus really took off. It's a well-designed game. You don't get on those lists without being a well-designed game. But that's not the reason why these games are successful. That's kind of the bare minimum to even be considered. So if you're in this class thinking that you're going to make a lot of money as a board game designer, I know people in Hasbro I can get you in touch with. And I think they're probably going to say, don't expect that. You might be able to have a decent living working in a large company like Hasbro or Mattel, which I think is the same company, yeah. But it's probably not because of the quality of the games that you're designing, but more about your ability to design a game based on a spec that was handed to you. Based on a requirement. It's like, we need a Star Wars Risk game by Christmas. Design it in two months, or something like that. And you execute that well and you will get a good career in a place like Hasbro.
If you're designing for yourself, you should feel very happy if you manage to make your initial investment back. A lot of people who are sort of hobbyist game designers, who make incredibly good games, don't make back the money that they initially invested into it. And that's fine if you went into that business knowing that. Because for some people, it's just good enough to be able to have a game with your name on it being played by a whole bunch of people who appreciate it. That's fine. More likely, a lot of people are going into careers, I'm assuming, that are not game design. Some of you are. But some of you will go into a career that might benefit from some of the design practices that you're going to learn in this class. Not just iterative design, although we've already talked about how that's relevant to a number of engineering disciplines. But also things like: you want to teach people a new system that you've designed, or a new system that's been put into practice. You want to create some sort of team building exercise. You want to communicate to someone how complex a certain system is that's relevant to your work and your discipline. There's a bunch of things that games can do to communicate ideas that might be relevant in whatever career you end up in. Even here at MIT-- I'm the Creative Director of the MIT Game Lab-- there are a bunch of different labs at MIT who are using games for a variety of different purposes. Eyewire. Anyone heard of that one? Yep. AUDIENCE: I was in Sebastian Seung's lab. PROFESSOR: Yeah. Over in the Picower Institute right across the street, where the train tracks run under the building, they've got a game that's basically massively multiplayer-- massively single player, really-- and you solve a puzzle which is kind of like paint by numbers. You're clicking and you're coloring a field. What you're really doing is you're identifying neurons from scanning tunneling microscopes.
These are neurons from a mouse retina. And you're basically doing all of the pattern recognition that computers have a tremendously difficult time doing, but humans actually are pretty good at. And then you're accumulating all of that data. It's a fairly nice, very clicky, Minesweeper-ish kind of feel to this game. You're just clicking things so that they fill in a nice contiguous space. It's a 3D game, so there are some mental hoops that your brain goes through. And a kind of nice building-puzzle feeling. But what you're really doing is you're providing Sebastian Seung's lab with a ton of data about how a mouse's eye is wired, which is amazing. And I think they've got a couple of papers. So try it. It's real fun. Anyone here from systems, system dynamics? No? Engineering? I think their parent group is the AeroAstro department. [BACKGROUND NOISE] PROFESSOR: OK. Do you think that's a kid out there? AUDIENCE: I wonder if they're walking away or not. PROFESSOR: Just like a three-year-old or something, that's fine. But I hope it's not someone who's hurt. But a lot of system dynamics folks on campus, engineering systems design for instance, will use games to try to teach clients about certain engineering practices. So there's a group here that has a game called Space Tug-- AUDIENCE: Space Tug Skirmish? PROFESSOR: Yeah, Skirmish. AUDIENCE: I've played that. My roommate worked there last summer. PROFESSOR: Yeah. A UROP? AUDIENCE: Yeah. PROFESSOR: And I think they're still looking for more UROPs, because they've turned their board game into a digital game and they need to keep working on it. AUDIENCE: And it just [INAUDIBLE] three weeks ago and [INAUDIBLE]. PROFESSOR: Yeah. It's kind of got a-- I was going to say a Magic: The Gathering feel, but that's not really true anymore, the way the game has evolved. But it's primarily a card game about building satellites and launching them into space.
And then realizing that by the time you launch something into space, it's kind of obsolete. And the question is, how do you design a satellite so that it can adapt to changing circumstances? That's what they are teaching people from NASA and from the military about. And they decided that games are the way to do it. It started with a board game. They're moving to a digital game. There's a very large group here on campus called the Scheller Teacher Education Program. And they design games for K-12 through freshman undergraduate-level science, math, language. How to better address some problems in curriculum, in standard classroom curriculum. So they've done MMOs. They've done puzzle games. They've done a lot of stuff in Flash, because it's very easy to get Flash running in a school. It's much harder to get a whole school to download a piece of software and install it across all the computers. All these places have UROPs. My lab has UROPs. So if you're interested in any of these things, you should definitely check them out. You can ask me for more leads. There are a lot of different groups. And finally, back to AeroAstro. There is a group that works with zero gravity satellites. What am I talking about? They're satellites. They work in zero gravity. AUDIENCE: SPHERES? PROFESSOR: SPHERES, yeah. These little compressed, I think, carbon dioxide propelled SPHERES. They look like gigantic dice, actually. And they spin around in space. And they'll run C code. And they have a high school programming competition, kind of like the FIRST robotics tournaments. Think of that in zero gravity. So they have you accomplish things. It's a pretty cool high school education thing, to connect high school kids to space, basically. I know they are looking for UROPs. So if you're interested in that, you should definitely talk to me about that. I'll give you the contact information. So I've been yammering for about an hour. Let's take a break. Stretch. I'll start distributing some of these games.
So we have one more discussion topic before we go into game planning. Some people should probably distribute stuff up here as well. AUDIENCE: Where's the nearest bathroom? PROFESSOR: Bathrooms are out to the right. [INTERPOSING VOICES]
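The Free Parking point from the Monopoly discussion above can be illustrated with a toy simulation. This is a rough sketch, not real Monopoly: the flat income, the fee amounts, and the 1-in-8 chance of collecting the pot are all illustrative assumptions, chosen only to show how recycling fees through a pot delays bankruptcy.

```python
import random

def turns_to_first_bankruptcy(free_parking_pot, seed, max_turns=2000):
    """Toy model: 4 players, flat per-turn income, random fees paid to the bank.

    With the house rule, fees pile up in a Free Parking pot and get
    recycled back to the players instead of leaving the game.
    """
    rng = random.Random(seed)
    cash = [1500] * 4    # standard Monopoly starting cash
    pot = 0
    for turn in range(1, max_turns + 1):
        for p in range(4):
            cash[p] += 50                             # rough per-turn income (passing GO, etc.)
            fee = rng.choice([0, 60, 100, 150, 200])  # taxes, fees, rent paid to the bank
            cash[p] -= fee
            if free_parking_pot:
                pot += fee                            # money stays in the system...
                if rng.random() < 1 / 8:              # ...and occasionally comes back
                    cash[p] += pot
                    pot = 0
            if cash[p] <= 0:
                return turn                           # first bankruptcy ends the game
    return max_turns                                  # nobody went bankrupt in time

seeds = range(30)
plain = sum(turns_to_first_bankruptcy(False, s) for s in seeds) / 30
house = sum(turns_to_first_bankruptcy(True, s) for s in seeds) / 30
print(f"rules as written: ~{plain:.0f} turns; with Free Parking pot: ~{house:.0f} turns")
```

Under the written rules the average fee exceeds the income, so money drains out of the system and the first bankruptcy comes quickly; with the pot, fees are only redistributed, so the game drags on many times longer, which matches the complaint about game length.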
MIT_CMS608_Game_Design_Spring_2014
23_Changing_Rules_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We go into a reading today. So this is an interesting book that came out of something called the New Games Movement, which I have mentioned a few times in class, but just as a reminder, I think it was mostly in the '60s. How many of you kind of got that vibe while you were reading this? But it's interesting because it's written in this almost Hemingway-ish kind of short little sentences-- short little declarative sentences about wellness, and happiness, and getting along with the community and stuff like that. But all of the principles, at least in the chapter in today's reading, aren't actually all that different from what I've been teaching all semester long regarding game design and play testing, if it's your job to actually make a game. What Bernie DeKoven and some of his other colleagues-- has anyone heard of the Whole Earth Catalog? Brand, was it? AUDIENCE: Brand, yeah. What was it? PROFESSOR: The Whole Earth Catalog. AUDIENCE: That was the thing with the parachute? PROFESSOR: Well, actually, that's the New Games Movement in particular. Yeah. It's kind of the same community of people who are just trying to find ways-- not only ways for people to get along with each other, but also things like [INAUDIBLE], where you're taking military equipment and turning it into things that you play with, and stuff like that. So today's chapter has a lot of stuff about how people who don't necessarily see themselves as game makers can use some of the practices that we've been talking about in this class to basically change the way they play games that they already play. And in the process of that, create new games.
The New Games Movement-- there's actually a New Games Book that is more about the kinds of games that you can play. It includes things like parachute. If you've ever done that game where there's two people back to back-- you're sitting back to back and then your job is to stand up, I think without using your hands. Is that right? Oh, you lock your arms. Yeah. Also that game where everyone gets into a tight circle, reaches in, grabs somebody else's hand, and the whole job is to untangle it. You see that in schools a lot nowadays, as well as games like parachute. And all that came from this New Games Movement. Again, it's a reaction to the wars in Vietnam and, I guess, Korea. Am I getting my chronology right? AUDIENCE: Yeah, Korea then 'Nam. PROFESSOR: Yeah, and just trying to say, how do we learn how to play with other people instead of fighting other people? But there are a couple of interesting ideas in here that I felt were kind of unique to today's reading, that we haven't necessarily addressed in class. Did anything leap out in Changing the Game? Yeah? AUDIENCE: He talked a lot about borrowing rules. So, taking something from another game and kind of putting it in your own. And making it your own, and getting you over the hump of having to make up something completely new. PROFESSOR: Yeah, and it's kind of funny because the example that you talked about-- it's kind of like borrowing the rule on the fly, from a different variant of the same game. But I think the same principle can actually be applied to a much wider range of things, like safety rules. Say you're playing a game like tag or something like that, and you haven't necessarily mapped out the boundaries of your game, but the game that you've played with your friends before decided that one of the boundaries was going to be the edge of that road, right? Nobody can run past that road. That's a fairly sensible safety rule.
It'd be easy enough to just borrow a rule from that and say, just like that last game that we played, no one crosses that road, for safety reasons. Yeah? AUDIENCE: Another one is borrowing a combat system, which can be really, really confusing as far as where it comes from, especially when your game isn't necessarily about the combat. Just not having to worry about how the combat actually works. We borrowed a few different ones. Like, we tried Risk's, we tried [INAUDIBLE]. It's like, we don't actually want to spend time on developing a fair combat system, we're just trying stuff out. PROFESSOR: Sure. I mean, as game designers you can certainly be inspired by what other people have done. And of course, again, you're designing a game meant for somebody else to play, whereas what Bernie DeKoven is writing about is changing a rule of a game that you're about to play, or maybe even in the middle of playing. [INAUDIBLE] AUDIENCE: I just found it interesting how [INAUDIBLE] rules that sort of help players play the game as it is right now, and rules that actually change the strategy that you have to [INAUDIBLE]. PROFESSOR: Right. So there are rules that clarify what it is that you can or cannot do, right? And then there are rules which he explicitly states as the definition of a changed game: one that requires you to come up with a new strategy. So things like that safety rule that I described earlier don't necessarily fit that. But they fulfill other purposes, like being able to play well, giving you the feeling of wellness. I can't believe that line, but it's great. But then there's also rules like, we're going to change the game because it's not challenging given the way it's currently played. Maybe the current set of rules has a simple exploit that just leads to one person winning all the time. Maybe someone is just way more skilled at this game than somebody else, or you're playing with a five-year-old, something like that.
And you want to have some challenge, you want to keep some challenge for the kid. But if you just stick with the rules as they are right now, the kid will be endlessly challenged and will not ever be able to win, and you're not being challenged at all. That's where you give yourself-- and he brings this up-- what's the thing you introduce into the game to be able to rebalance that? You should know it even if you didn't read it. If you're playing with somebody who is way less skilled than you, you give yourself a handicap. Yeah. Yeah, so handicapping is a really interesting concept regarding game balance. Because normally when we think about game balance, it's all about how we design the rules so that everybody is kind of on equal footing, everyone kind of has a fair chance of winning-- maybe not exactly an even chance of winning, but at least a fair chance of winning. But he's introducing the concept of handicapping as a way to reintroduce challenge in the situation where players are not equally skilled, because of physical reasons, because of expertise or familiarity with the game, that sort of thing. So let me see, is that it? Anything else that people remember from the reading? Again, a lot of the stuff-- there is an interesting little bit where he talks about cheating as a thing that players should be trying to do, if you are already in the mood, to be able to then suggest rule changes to the rest of the community. And you should be finding the loopholes, right? You should be trying to figure out, given the current set of rules that you're playing right now, what are the problems with that set of rules, and how do you improve on them? And the way you figure out whether these rules are important, or well written, or problematic, is you try to break them. And then, if you break them, you might realize that, oh, OK, that rule was really necessary in order for the game to play. But your attitude is, I want to continue playing with all of you.
Then you say, all right, we're not going to do that anymore. I think it takes a tremendous amount of trust among players to be able to accommodate that. I don't know how many people actually play regularly in environments like that. AUDIENCE: Are you saying that you should be trying to cheat to show the rules [INAUDIBLE]? PROFESSOR: That you should be trying to find the exploits. You should be trying to figure out exactly how ironclad this rule actually is, right? Or whether this rule is a good idea in the first place. AUDIENCE: By breaking the rules? PROFESSOR: By breaking them. It's very, very strange. It's a very, very interesting suggestion. I can't recall a situation when I've been in a play community, as he describes it, where there's enough trust to be able to let people get away with that. AUDIENCE: Honestly, I think it's just less interesting. I think it's more interesting-- I forget who said this-- there's a quote that goes, creativity actually comes from constraints. I think it's a lot more interesting to actually try to find the broken strategy in a game as it is. PROFESSOR: OK, to find the degenerate strategies within the rules, as opposed to figuring out where the rules are broken. AUDIENCE: So some of them play the-- [INAUDIBLE] change the rules [INAUDIBLE]. Sometimes when playing games you're like, how do we-- but I feel like every time I've done those with other people, it's always been important to everyone that everyone knows what the rules are [INAUDIBLE]. PROFESSOR: And that's one thing that he very, very clearly states. It's like, you are cheating publicly. You are broadcasting the fact that you are cheating, which brings up the question, are you really cheating or are you just proposing a rule change, right? But the idea of treating the rules as something that could be malleable, I think, is what he's really getting at. He does choose to use the word cheating, though, and I'm not entirely sure whether that's the right word to describe it.
AUDIENCE: To me it seems sort of similar to the phase idea of [INAUDIBLE] what's underneath the game. So if you start breaking a rule and you're doing it publicly, with the knowledge of other people, just to see what happens if we start to break this rule, I think you can get into some interesting issues. Like, well, what makes the game tick? I don't know, I feel like a lot of games I used to play as a kid, we would explicitly break rules. We would know the rule of the game, but we would choose to either not play with it or to just limit it. This resource is limited? Well, you know, this time when we play it, it's not going to be. Let's just see what happens. PROFESSOR: Right. Yeah, and that's the attitude that he is trying to encourage. AUDIENCE: I feel like, for whatever purpose you're playing a game-- like if you're playing to win or if you're playing just for fun or something-- players are too keen to change the rules when they don't do [INAUDIBLE]. PROFESSOR: Players aren't, or players are? AUDIENCE: Players are. Like, if you're just not very good at a game, or you run into some sort of obstacle or strategy that [INAUDIBLE] done that. I think players, in general, jump to changing the rules too frequently. PROFESSOR: Too frequently? AUDIENCE: Yeah. PROFESSOR: OK, let's dig into that. Why too frequently? Why do you feel that happens too often? AUDIENCE: I'm thinking along the lines of competitive games here. You just get tired, or you want to blame other people besides yourself. PROFESSOR: OK. AUDIENCE: Just in general. PROFESSOR: But what if you were trying to change the rules for other people? AUDIENCE: Yeah, no, I think that sort of thing requires a really in-depth understanding of all the strategy [INAUDIBLE]. PROFESSOR: Well, say I'm in a fighting game tournament or something like that, where usually the rules are pretty rigidly enforced, right? AUDIENCE: Yeah. PROFESSOR: But I'm not like a top-level player.
I just happen to be on a real winning streak among people that I know and I don't seem to be able to lose right now. So I think what Bernie is talking about is describing a situation where, especially if you are that person who seems to be on a winning streak, you might want to suggest a rule change to be able to bring the challenge back to you, right. I've seen players do this specifically in our kids where it's like, I'm going to pick the character that I'm crappiest at because I can't seem to be able to lose by playing the characters that I'm actually good at. That's [INAUDIBLE] very interesting for me. AUDIENCE: I mean, [INAUDIBLE] rule changes to [INAUDIBLE] I'm calling it cheating. PROFESSOR: Yeah. AUDIENCE: Because to me, cheating implies that you are not playing by the rules. If I change the rules and everyone agrees on the rules and how the rules change, it's not cheating. Then you have changed the rules. You are now playing by a different set of rules. If you play by those, you're not cheating because you're not playing by the original. I feel like cheating is anything where you are breaking the set of rules that was agreed upon. PROFESSOR: Well, cheating also, I think, has a certain assumption that everybody else is not breaking the rules, right. No one is playing with a different set of rules but you, the cheat. In fact, a lot of cheats only work if everyone else sticks to the rules as written and then you're the only one cheating. So again, I think he's trying to get across the attitude but used the wrong word for it by describing it as cheating. AUDIENCE: You brought up the example of fighting games and picking a character that you're not very good at. That's not even a rule change, in my opinion. PROFESSOR: That's true. Well, yeah, I guess. I'm trying to think of the implicit-- maybe not a rule change, but it's more like a values change that you should be playing to win, right.
AUDIENCE: You can make a rule that, OK, normally everyone can choose whatever character they want, but you're always better than us, so when you play in our play group, I choose your character. PROFESSOR: Oh, yeah. You get to choose my character or something like that. You tell me who you want to play against and I'll try my best to play that game. Let's start with Miguel and then over to Laura. Yeah? AUDIENCE: So, one thing I wanted to bring up was so you think if one person isn't being challenged enough then they can start experimenting. But in my experience it more often happens when the entire group is not being challenged anymore. In the sense of, you two just played Catan three times over the last few days. And she was kind of tired of Catan. You're kind of ready to do something else, but you don't have a different board game [INAUDIBLE] or something. PROFESSOR: So then make a variant, sure. AUDIENCE: --different variants up. And it's like everyone is trying to increase the challenge, not because it's not challenging. There's still [INAUDIBLE] who is going to win and you all sit down and start playing. But it's not challenging in the sense that it's not engaging and not interesting. PROFESSOR: I think the reason why it's easier to do it when there's a lot of people finding a game uninteresting is because it's easier to get a consensus to be able to enforce a change of the rules, right. It's a lot harder to say-- say I'm 12 and I'm playing with a whole bunch of other 12-year-olds, and there happens to be a 9-year-old in the group and we're playing soccer or something like that. It's harder for anyone to convince the rest of the 12-year-olds to take it easy on a 9-year-old. Like, all right, what if we give an extra player on the 9-year-old team or something like that. Not to say it can't be done, and I think that's what Bernie DeKoven is saying, no, you should be able to do that sort of thing.
And it kind of makes the game more interesting for everybody, not just for the person who is having a hard time. It makes the game more interesting even for the people who have an advantage because now it makes it a big-- more of a fight, I guess, for them to be able to win. Laura? AUDIENCE: Well, I think the other issue I have with the word cheating in describing changing the rules-- and less so for individual players-- I know, for example, there are games that I've played since I was really little where it wasn't really intentional that we were not playing by the original rules. It's just that when I was seven I learned how to play Monopoly or Clue or some really generic board game that I learned when I was really little. I did not sit through all the rules or I thought I knew the rules and didn't really remember them. And so you had a change in the rules unintentionally. And I think that's a very different category. PROFESSOR: Yeah, house rules. AUDIENCE: Your house rules become so much different. Even if you play with other people and you're kind of all playing by your own house rules, it doesn't really feel like-- I wouldn't always call that cheating. It's not because you're not trying to play by the correct-- what you think are the correct rules. PROFESSOR: And more importantly, everybody in the room's agreed to that set of-- this new set of rules, right. So cheating is again not a great word to be able to describe that. AUDIENCE: I feel like when you're modifying rules there's a few different ways to do it. Sometimes you just add a little to a game. You're just like, it's not interesting enough, make up some initial variants to make it sort of more [INAUDIBLE]. And other times if you're doing it it's [INAUDIBLE] Catan with friends there, we [INAUDIBLE] because it's really overpowered if you just-- it's really overpowered strategy [INAUDIBLE].
Some groups a lot more [INAUDIBLE] overpowered strategy [INAUDIBLE] after playing twice we've [INAUDIBLE] one beater in there [INAUDIBLE] because he's ridiculously overpowered. And so [INAUDIBLE] different levels of [INAUDIBLE] They see things and there's experimenting [INAUDIBLE] PROFESSOR: I think in the end, Bernie DeKoven's kind of coming from, you should be free to challenge authority. That's kind of like the implicit thing that you get from the whole catalog as well. AUDIENCE: Going back to what Nathan said, people are sometimes too keen. For example, something might seem initially overpowered but then you play it, you might be after the first game, everybody's just like, oh yeah, it's broken, might as well just change it, right. Whereas, if you'd played another game, you might have realized, oh wait, you can actually counter this by just doing X. PROFESSOR: So actually a really interesting counterpoint, which is not part of our reading, is Dave Sirlin, who is an MIT alum, actually, who wrote a book called Playing to Win. This is not part of our class reading but he will definitely back that up, right. He's like, no, it's everyone's responsibility to play as hard as they can because-- especially for certain kinds of games-- you haven't discovered everything that your game system is capable of. And if somebody can beat you playing that same game with the same set of rules, then clearly it's possible to beat that player or at least get up to the point where they are equally skilled. So now there's a whole bunch of game design philosophy to come out of that as well. And I believe we played-- did we play Yomi in this class before? AUDIENCE: We used to. PROFESSOR: We used to? Yeah, OK. There's actually an iPad version of it now. There's definitely not an iPhone version because it's way too much text to fit on an iPhone screen. But Yomi is a game that-- it's a card game version of basically a Street Fighter type fighting game.
And it's all about anticipating what the other player is going to do and if you manage to get it right, you can build up your combos and everything. But the whole idea of everything that Dave Sirlin's worked on-- I believe he did a remix version of Street Fighter, which was [INAUDIBLE]. Yeah, the HD remix versions. The whole idea of everything that he wrote about it and everything that he makes is that these kinds of competitive games really should be more about, how do you challenge yourself to get better? And it's a lot less about the community around you. It's a lot more about you and the game system. So you are submitting yourself willingly to the authority of the game system, which of course, game designers create. And then if there's anything that you should be challenging, you should be challenging your ability to be able to rise to the occasion, right. AUDIENCE: Everyone should be doing [INAUDIBLE], right. If that person's arguing that, they're not giving you a good experience. They're letting you down. PROFESSOR: Right. They're not fighting as hard as they can. And even Bernie describes this, right. The whole idea is what happens if you give somebody a chance to-- it's like, I could have played this game optimally because I know this game so well and have beaten you into the ground. But I'm just going to-- instead of handicapping myself, which is a sort of public declaration that this is how I'm going to be doing it, I'm just not going to play optimally. Then I think both Dave Sirlin and Bernie DeKoven are actually in agreement. This is not a good way to play. Because the person who is getting the benefit of that feels cheated of the experience as well. Did I see? AUDIENCE: I think I need to clarify that oftentimes you can use [INAUDIBLE].
So in a game with really defined strategies, you could sort of be going for some strategy that you believe to be sort of optimal because you wanted [INAUDIBLE] other players that you were like-- and it could be, be and you sort of are [INAUDIBLE] along the lines of, oh you [INAUDIBLE] possibilities of the game in there. So you can just try hard and you can do that. I think you're actually working toward that purpose if you do something like-- you're not going to try a strategy that's totally bad, but who knows. PROFESSOR: You know, I do that a lot and I always feel compelled to publicly declare that this is what I'm doing. I think because I don't want anyone to think that I'm stupid. I really do something bad [INAUDIBLE] AUDIENCE: A lot of times when I play a board game, I play with people who are really, really good at understanding and making board games, too. If I think that something might be an interesting way to play that game, I'll say it out loud and then it becomes more of a cooperative, how could this strategy be-- PROFESSOR: Let's play this out. Let's see what happens, right. A learning experience for everyone. AUDIENCE: Well, what would you have done in my shoes? It's like at this point we're not even playing to win, we're playing to develop a strategy. PROFESSOR: Yeah. AUDIENCE: Yeah, I was thinking [INAUDIBLE] you're not necessarily playing to maximize your fun right now. You're playing to explore [INAUDIBLE] the game has. And maybe find a bigger strategic picture later on, I guess. PROFESSOR: And I think that's in keeping with what Bernie has tried to write about. He ends the chapter by talking about how you are scoring, right. How you're deciding what a win is. And in this particular case, in the situation that we're describing, a win is learning how a new, untested strategy is going to play out. It becomes less about who is the individual who's actually winning and more about, are we getting something new from our understanding of this game?
Which can be really entertaining. Whereas I think Dave Sirlin is not quite so willing to let that go. He's a little bit more adamant that, no, you should be winning based on the definition of the rule set, not based on something like a community-derived definition. And Bernie DeKoven is trying to fight for, no, the community determines what the win conditions are. In fact, there are many good reasons for you to go for that. Different points of view. I'm not saying that anyone is [INAUDIBLE] about that. AUDIENCE: I mean, I think if you want to maximize-- like in the long term-- your chances of winning have to be [INAUDIBLE]. PROFESSOR: Oh yes, absolutely. Yeah, I will agree. If you're a professional player who is in a team, you are probably going to be working with your teammates a lot of times-- not to determine who is better at the game, but to try to make everybody else play better, right? And there's just two different perspectives on how you arrive at that. One is to always play as hard as you can so that everyone has to sort of rise to the occasion. And the other way is just-- no, it's OK to do things like experiment. It's OK to do things like handicaps. It's OK to do things like-- I'm just going to play-- I was just playing a game recently where like I'm going to do a sub-optimal strategy on purpose, just to see whether my opponent's capable of fighting it off. Because it's just ridiculous. So yeah-- I had fun even though I didn't win. So today I was thinking of going through three games. But what time is it now? AUDIENCE: 3:00 PROFESSOR: It's almost 3:00? OK, so I think we're going to do two games. I'm going to very quickly have a discussion about what are all the different versions of Mafia that people have played. How many of you have played a version of Mafia or Werewolf or something like that? No? Everybody? OK. All right, let me quickly give a description. The general framework of these games is everybody gets a card, and the card gives you a role.
These are usually just regular playing cards where-- one version that I have played is that if you have a king or queen or a jack of clubs, you are mafia. And it's your job to kill everybody else who didn't receive a king, queen, or jack of clubs. And then everybody else is a villager whose job is to figure out who are the mafia players. So everyone closes their eyes and opens their eyes based on the cues of a person who is like a moderator. If I ran the game here in class, I would be the moderator. And I'd say like, nighttime, everyone closes their eyes. Mafia awakes. And then the mafia do not say anything, but they can communicate with things like gestures and moving their head. And they're basically sort of using body language to vote on who dies that night-- usually by nodding, or by pointing, or something like that. And then everyone closes their eyes. The moderator declares who dies. Everyone opens their eyes again, and that person reveals their card. And that shows whether they were mafia or not. And the whole idea is that in between rounds everyone speaks freely. And so there are accusations that are flying around, and suspicions, and exchanging of very, very tangential evidence-- like, I thought I heard rustling over there. You know, and it's like-- you know, that could be the mafia person just trying to throw you off the track. That could be somebody who is actually the villager. Only the mafia know for sure who is who. And then there are other hidden roles. So what are some other variants that you've played? Yeah? [INAUDIBLE] AUDIENCE: So I've played where the card doesn't get revealed when you die. PROFESSOR: Oh, yeah? AUDIENCE: And one particular other thing that-- like all sorts of different roles, as well. And the voting mechanic for who gets killed-- so I've seen it played where people decide when the discussion is over. And then they all vote. And then the person who has the majority gets killed. PROFESSOR: They vote by pointing? AUDIENCE: Yeah.
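The base framework just described-- deal hidden roles from a shuffled pool, then alternate night kills with daytime majority votes-- can be sketched in a few lines of Python. This is a minimal illustration, not a canonical ruleset: the function names, the strict-majority lynch, and the mafia-at-parity win condition are all assumptions made for the sketch.

```python
import random

def deal_roles(players, n_mafia):
    """Randomly assign n_mafia players the 'mafia' role; everyone else is a villager."""
    mafia = set(random.sample(players, n_mafia))
    return {p: ("mafia" if p in mafia else "villager") for p in players}

def majority_vote(votes):
    """Given a dict of voter -> accused, return the accused with a strict
    majority of the votes cast, or None if nobody reaches one."""
    tally = {}
    for accused in votes.values():
        tally[accused] = tally.get(accused, 0) + 1
    for accused, count in tally.items():
        if count > len(votes) / 2:
            return accused
    return None

def winner(roles, alive):
    """Villagers win when no mafia remain; mafia win at parity or better.
    Returns 'villagers', 'mafia', or None if the game continues."""
    mafia_alive = sum(1 for p in alive if roles[p] == "mafia")
    if mafia_alive == 0:
        return "villagers"
    if mafia_alive >= len(alive) - mafia_alive:
        return "mafia"
    return None
```

A moderator script would then loop: mafia pick a night kill, check `winner`, run a `majority_vote` lynch during the day, check `winner` again, until one side is returned. All the variants discussed below (medic, detective, extra factions) amount to extra hooks in that same loop.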
PROFESSOR: Yeah. AUDIENCE: But the way that I always play is that all it takes for someone on the block is to have someone say, I'm putting that person on the block, and another person says "seconded". And at that point, until a vote happens, you can't put anyone else on the block. PROFESSOR: Oh, interesting. AUDIENCE: So either everyone will vote and will say, OK, this person-- and a majority will result in a kill, at which point it's night instantly-- and that's that day. Or they fail to kill that person, and then people can feel free to like put up more people-- like to just point out another person which they like-- no, I'm like [INAUDIBLE]. PROFESSOR: You had your hand up. AUDIENCE: The two most basic roles, in addition to like mafia and villager-- that I've seen in pretty much every game-- is the medic and the detective. The medic, every night, gets to point at someone, and if the mafia were trying to hit that person, that person stays alive. The detective, every night, gets to basically point at someone while everyone else is sleeping. And the moderator will either nod his head saying, yes, he's mafia or no he's not. AUDIENCE: So I have sort of a variant of Mafia that's at MIT, mostly [INAUDIBLE] called Live Action Mafia. And the way this works is one day is one day-- so like often there's [INAUDIBLE] where like mafia lasts one [INAUDIBLE]. But in this version, the mafia can kill someone once per day by tapping them on the shoulder and saying bang. And essentially, because you obviously-- like in this game, you couldn't do this. If you're mafia, you couldn't kill someone in front of other players. Because then they will see you kill that person, and will tell everyone that you killed that person. And then you'll be lynched for it. It's an [INAUDIBLE] game of-- oftentimes you try to figure it as like-- people like [INAUDIBLE] so they get friends to lie for them about where they were in that location there.
In one of the games recently there, someone said, oh, I was working on my lab until like 4:30 or something. And then someone found out who the partner was in that lab, emailed the partner, and the partner said they got out at 4:00. And this kill was at 4:15. That person was lynched. And so there are a number of special rules and powers, but like [INAUDIBLE] the game, where the focus is on actually like figuring out where people were like [INAUDIBLE]. PROFESSOR: One thing that I think-- one reason why that works well on the MIT campus is because it's a fairly small campus compared to other university campuses, even in Massachusetts. And even when you're off campus, you are often in close proximity to a lot of other MIT people. Do people know who else is playing in this game-- in a round of Live Action Mafia? Do you know who [INAUDIBLE]? AUDIENCE: Occasionally there are some-- there are like [INAUDIBLE]. Like most of it's random. There's people on campus. There's always people playing at-- there's a fraternity across the river in Brookline. They're [INAUDIBLE]. And so when a kill happens there, immediately everyone's trying to figure out who was on the MIT campus. [INAUDIBLE] people are trying to figure out who couldn't have gotten back there in time. PROFESSOR: Right. That's nice because it helps you do the detective work. AUDIENCE: I used to play, but I'm not friends with like a lot of people who are in the game. I just had a couple of friends who always invited me to play. And so I usually get lynched pretty quickly, because they'd just be like, we don't know this kid. So long. AUDIENCE: My biggest problem with Mafia is like even just like the table [INAUDIBLE] like there's like no-- AUDIENCE: [INAUDIBLE] Mafia, if you don't know-- AUDIENCE: Yeah. AUDIENCE: And so like what will end up happening based on the first round is like, John, you look suspicious. I think John's mafia. Everyone vote yes. And then everyone votes yes, and he dies.
And he's sitting there like, I'm sorry, what? PROFESSOR: Yeah. That's-- AUDIENCE: That's kind of how the game [INAUDIBLE] PROFESSOR: Sorry? AUDIENCE: I had more. AUDIENCE: Oh, sorry. AUDIENCE: I was going to say that a couple of interesting variants I've played with are like where you have things like the fool, where his job is to try and get killed as Mafia, or an innocent, where if the town accidentally kills them, then they just like automatically lose, and things like that. AUDIENCE: I think the most confusing variant is the [INAUDIBLE] where they have the mafia, the medic, the detective-- then they also have, I think, the serial killer, who is supposed to be the last person standing, and then the arsonist-- I can't remember his goal. Then there was a transporter who could switch targets. You know, like if the mafia picks one person, and the transporter hasn't been picked, that other person can switch the mafia's turn. And then there was a seer who can talk to dead people. PROFESSOR: Who specifies the roles? Is it just like a stable set of rules on an online server? Or is it like every round has this set of check boxes? AUDIENCE: I think that how it works is that you choose what type of game that you want. PROFESSOR: Oh, I see. So you're setting up a little lobby for this-- OK. I see. OK. AUDIENCE: One of the funny ones that I played-- like there's one that was called Epic Mafia. It's actually imbalanced, but it's still fun. It's a crazy cop, who always-- so there's basically three cops and a mafia. And the cops don't know which cop they are. There's a crazy cop who always gets everything inverted. There's a regular cop. And then there's a cop who always sees everything as truthful, or always sees everything as innocent or everything as not-- I don't remember which. It's actually really unfair for the mafia [INAUDIBLE] where you just like say, this is what I got. And then someone dies, and then you say this is what I got.
And then, basically, you can just piece together who is lying. But it's interesting. PROFESSOR: [INAUDIBLE] AUDIENCE: Yeah, I was going to say. [INAUDIBLE] the game before that. There's a game where someone will swap God with one of the players. And God decided to allow this to proceed. AUDIENCE: Wait, hold on. AUDIENCE: Did God [INAUDIBLE]? AUDIENCE: [INAUDIBLE] PROFESSOR: OK AUDIENCE: So this person role-swaps God with one of the players. PROFESSOR: Oh, OK. AUDIENCE: And so God's role became a villager. The moderator became-- it was like the moderator's role was now a villager, and one of the random villagers was now God. PROFESSOR: OK. AUDIENCE: [INAUDIBLE] AUDIENCE: They're like-- Yes. And the person who was now the new God made a bet with the player who was the head of the mafia that they could live until the end of the game. And after it's money, like 10 different people won this thing. AUDIENCE: So the different aggregates of roles happen more or less-- not by accident, but it's not like someone-- it's not like there are standards. It's like you get together with the group that's playing, and then you decide what roles there are. Or someone is like, oh, I thought about this beforehand, and I have some set up I think would be really cool. So my brother once put together a set up like that, which had I believe 17 people-- three of which were vanilla villagers-- with, I'm not sure whether it was four or five different sides. PROFESSOR: Factions. AUDIENCE: So there's like four or five different factions. So they were just playing like-- there's like mafia, and then there's like werewolves, which are identical to the mafia, and also get a kill at night. And then there's the serial killer, who's bulletproof, so he can't get killed at night. And he was also a separate faction trying to kill everyone else. And then there's the vigilante, who's like a good guy who also gets a kill at night.
And so this game, it was actually, surprisingly, very good. Because everyone basically had a role, so no one was unhappy about that. And everyone more or less tried-- and the game is quick, because even though it's 17 people, you've got like four or five people dying every round for the first couple of rounds. PROFESSOR: I'm trying to figure out why there are so many variants on Mafia. There's something about Mafia or Werewolf-- or any of the names that people use-- that seems to lend itself to people coming up with new variants and then adopting them. I mean, clearly it makes sense for-- you need to be able to have a large group of people agree on the roles before it starts. So I'm not so confused about that. I'm wondering why Mafia-- AUDIENCE: I think, one, it's really fast to play through, which makes it very easy to playtest. And then, two, it's also just really, really easy to extend. You could always make your own role, but you honestly don't need to. There's at least a hundred different roles. You could just like eenie-meenie-miney-mo and like aggregate into a game. PROFESSOR: All right. AUDIENCE: I was going to say that it's sort of boring for the villagers that they're just getting kicked out one by one. PROFESSOR: So there's a flaw in the game that every one of these variants is trying to address, which is making things a little bit more interesting for the poor villagers. And I think the fact that it's very clear that all the rules are just imposed by the people in the room helps-- there's definitely no canonical Mafia tournament as far as I know. I guess we could have Mafia tournaments spring up in the middle of conventions and stuff like that. But they don't seem to last very long. It's hard to imagine a version of Mafia where you are not in direct contact with the person who established the rules for the round that you're playing right now. So it's very easy to just think that there's no canonical set of rules. AUDIENCE: I was just reminded of some things.
There was one time where there was a game of Mafia where God failed to give out any Mafia cards. PROFESSOR: Oh. AUDIENCE: Like, he cheated, as far as that goes. And so God chose someone to kill every night, and pretended that there was a mafia doing it. And so the group started as probably around 12 people. And at around five people, they were like, OK, this is impossible-- as in, the previous [INAUDIBLE] they were like, OK, if this happens then blah. And if this happens, then blah. And then they killed someone, and that person was innocent, even though they had to be guilty in order for it to make sense. But they were like, what's going on? PROFESSOR: I don't know. I think that's the sort of cheating that we were actually describing earlier. Maybe we should try that once in a while. AUDIENCE: You know, they were like pretty mad at the person. PROFESSOR: I can imagine. I can imagine. Yeah, yeah. Maybe that's the counter argument. AUDIENCE: You are never God again. AUDIENCE: So I don't actually know what the game is called, but we call it spies. It's basically there's a certain number of spies, and a certain number of resistance players. It's very similar to Mafia. AUDIENCE: Resistance. AUDIENCE: Oh, Resistance? OK. PROFESSOR: So this is a Mafia type game, yeah. AUDIENCE: It's called Spies. It's like the opposite. It's very similar to Mafia, except that you have to win a certain number of missions by putting people on missions. And the spies try to fail them, and the resistance is trying to pass them. And we did something very similar to that, where we had-- one of our friends left the room for a minute. And we go, OK guys, next round we're all spies. And we came up with like a pecking order for who does what at what time. And we all tried to blame one kid for being the spy the entire time. Then, at the end, we finally managed to get to the point where everyone had voted for the only person that was not the spy to be the spy.
We basically just messed the game up to try and execute him and have fun with it. PROFESSOR: Right. AUDIENCE: But it kind of relates to that point of taking the rules and-- PROFESSOR: And Resistance is a little bit easier to identify that there is actually a set of rules. Because Resistance is a boxed product with a rulebook in it-- unlike a lot of other games of Mafia, where you kind of think things are pretty malleable. I do want to move on to something else. But before I get to Lockup, a couple of questions. If you have a smartphone, try downloading this. I think it's free on both Android and the App Store, because we're going to play a couple of rounds of this-- if the Wi-Fi works in this room. Which it may not. AUDIENCE: I really like Resistance, because there's actual public information which you need. It's not just, John's guilty. He's dead now. Right? PROFESSOR: You don't just start the game by making stuff up. AUDIENCE: Yeah. PROFESSOR: Now, anybody who doesn't have a smartphone, you're welcome to use mine. Who would like to? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, go ahead. AUDIENCE: But yes, I have a chart of Resistance games. AUDIENCE: Wait, am I the only one who has [INAUDIBLE]? PROFESSOR: Yeah. AUDIENCE: You could play [INAUDIBLE]. AUDIENCE: You can [INAUDIBLE] PROFESSOR: Although, actually, I'm going to get my iPad. I'll [INAUDIBLE]. [SIDE CONVERSATION] PROFESSOR: OK. Yeah, Spaceteam is a game that I want to give a try. And we'll see if it works. If you are surrounded with people who have the same kind of phone as yours-- like Android and iOS are two separate things-- you can attempt to do it over Wi-Fi. AUDIENCE: There's some way you can play them-- AUDIENCE: I'm not on the Wi-Fi right now. AUDIENCE: You can play Android and iPhone together. AUDIENCE: Over Wi-Fi. If Wi-Fi works in this room. AUDIENCE: I have Wi-Fi. [SIDE CONVERSATION]
MIT_CMS608_Game_Design_Spring_2014
20_Cooperative_Games.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, so this weekend's reading was Matt Leacock giving a YouTube presentation. Actually it was a Google presentation that was recorded onto YouTube. And he's the designer of Pandemic, he also made the game Forbidden Island, and I think he also did-- Desert. I'm assuming that he did Forbidden Desert. That's my guess because they're not that different of games. I mean, these are cooperative games, for the most part. How many of you have played one of these games? Forbidden Island, Pandemic-- Pandemic the board game, specifically. Yeah, OK. Pandemic the computer game is a completely different game. Huh? AUDIENCE: Pandemic Two, the online game? PROFESSOR: Yeah. The one where you shut off Madagascar. AUDIENCE: Yeah, yeah, yeah. PROFESSOR: OK, that one is a different game. It's the same title, same concept, right. I mean, it sort of made sense that they would have [INAUDIBLE]. I mean, there's probably more of a similarity between Civilization the board game and Civilization the computer game. Although the Civilization board game that you play nowadays is not the same Civilization board game that existed when the computer game [INAUDIBLE]. AUDIENCE: [INAUDIBLE] PROFESSOR: The computer game aside, you can play the-- how many of you basically played a game where it just turned into one person telling you to do everything? That's my [INAUDIBLE]. How many of you had the same thing happen with either Forbidden Island or Forbidden Desert? It happened with you two? Or were you the one telling people [INAUDIBLE]? [LAUGHTER] PROFESSOR: So it seems-- actually, how many people play Forbidden Island or Desert? OK, only one. OK. AUDIENCE: So 100%.
PROFESSOR: So maybe it happens a lot more often. I don't know. I find it happens a little bit less often with Island, but it happens to me every time I play Pandemic. On one hand I'm usually the one receiving orders, and I don't mind because I'm not that invested in it. But on the other hand, I'm not that invested in it. It kind of takes me out. When someone starts telling me, this is what you should do, clearly this is the optimal move. This is the thing that you should do. It's like, that's fine. I'll do it. I don't care about this game anymore because I don't have any [INAUDIBLE]. But I don't know. Yeah? AUDIENCE: So in my experience it's always been each of the players is doing that. PROFESSOR: Everybody else is telling you to do something? AUDIENCE: Every single person is telling every single other person exactly what to do, and every person ends up making a decision for themselves. Just at the end being like, OK guys, here's what I'm actually doing because I think that [INAUDIBLE]. PROFESSOR: That sounds reasonably interactive and you still have a decision to make anyway. AUDIENCE: It's fun because you actually get to like-- people actually listen-- PROFESSOR: Yeah. AUDIENCE: --on one hand because when you bring up a good point they'll be like, oh yeah, that's a good point. But on the other hand, no one-- it's not one person playing the game. PROFESSOR: It's like a board of directors giving you advice, and then you've got to figure out what's the right thing to do. AUDIENCE: Except once you're done you start giving advice. PROFESSOR: And everyone switches roles. AUDIENCE: I don't know if you're saying it's a negative, necessarily. PROFESSOR: I'm not necessarily saying that. AUDIENCE: But I feel like that might have very well been what he was going for. Like in the video he was all talking about his wife, and like, he wanted to be able to play with his wife, right?
So this is kind of a serious gamer can play it with someone that's not as invested is kind of what you're saying a little bit. PROFESSOR: Yeah, I mean, as a result I've played many games of Pandemic, which is a good thing. Yeah? AUDIENCE: Sometimes I think, like, [INAUDIBLE] PROFESSOR: Right. Like deliberately sabotaging. AUDIENCE: Yeah. Also, there's a game that [INAUDIBLE] being play tested basically. Called [INAUDIBLE], it's like, really pretty tactical. There's like a [INAUDIBLE] strategic components and you have teams of players [INAUDIBLE], but there's huge-- there are large player [INAUDIBLE], but at the same time it's really hard for one player to go help someone else on their team and do everything else components of two minutes for your turn. And so it happens [INAUDIBLE] someone is, where like, someone steps in and tries to help someone else, and they end up screwing it up even worse because they didn't realize that they [INAUDIBLE] everything that was going on and didn't have the time to analyze it properly. PROFESSOR: I think that's like an example of what you're describing as well as like here. In the realm of [INAUDIBLE] co-op games or games with a very strong [INAUDIBLE] component, you got-- there is an assumption, at least in games like Pandemic, that it's not so much that something is out there strategizing to beat you and you're trying to over come it-- like a different player, for instance, who is trying to achieve victory at your cost. It's more about, can you get your affairs and your communication and your decision making lined up [INAUDIBLE] so that you can actually overcome the odds that the designer has set up for you? So I think in the case of the [INAUDIBLE] games, there's actually somebody out there who's actively playing the game who's actively trying to win at the expense of your victory. So one way that I certainly see is the [? timing, ?] right is it's OK to cooperate as much as you want in a minute. 
And that just leads to, like, confusion and bad judgment calls and sometimes hilarity and [INAUDIBLE] Space Alert, which is maybe a game which you should [INAUDIBLE] because it's-- for those people who don't remember me describing it in previous classes, you're all members of a starship. You all pretty much aren't sure what you're supposed to be doing. You're playing the game in real time, because events happen as you play an audio track that just runs nonstop. It never pauses until the game is over. And you all have to be very, very coordinated to get anything done. Just like firing a single gun requires you to charge up that gun, turning on a shield so that you can block enemy fire needs to be on at the moment the enemy fire is about to hit you, I think-- as I recall [INAUDIBLE]. But [INAUDIBLE] it just [INAUDIBLE]. AUDIENCE: [INAUDIBLE] and also because there's so many threats [INAUDIBLE] it's very easy to, like, accidentally run out of energy or to do things where they [INAUDIBLE] or miss if you don't [INAUDIBLE] one of the [INAUDIBLE] things is obviously, like, [INAUDIBLE] fires their gun at like some threat, but they fired it a turn before it appears [INAUDIBLE]. PROFESSOR: Yeah. AUDIENCE: Which is like [INAUDIBLE]. PROFESSOR: Yeah, because [INAUDIBLE] has a range, if I recall, right? An effective range. So, you're all trying to achieve simultaneous victory. You're all trying to not die in that game. But because you have so little time to be able to figure out what the right thing is to do, you just [INAUDIBLE]. That's not the case, I believe, in Forbidden Island and Pandemic. You have about as much time as you like to make a group decision. But it's fairly [INAUDIBLE] these sort of competing cooperative games, where your only opponent is the game system the designers really created for you. You can of course set difficulty levels in a lot of these games. If you're learning it for the first time, you play it on kind of the easy setup.
And if you're [INAUDIBLE] with all the mechanics of the game, you can sort of crank up the difficulty by starting the game at a point of [INAUDIBLE]. But I think-- anything else that anyone noticed from his thoughts about the design of cooperative games? AUDIENCE: You mentioned-- I think it might have been in the video or something else [INAUDIBLE] the rule that said that you couldn't show anybody else-- PROFESSOR: Your hand? AUDIENCE: Your hand. PROFESSOR: Yeah. AUDIENCE: Because that was a very important component of [INAUDIBLE]. PROFESSOR: Yeah, yeah. AUDIENCE: I didn't understand the [INAUDIBLE]. PROFESSOR: It's kind of giving you still something to do. Because you have information that no one else has until you share it, there's still a thing that you can do, which is share the information that you got, even though it's not necessarily information that you probably-- in a game like Pandemic, it's not information you want to hide. You want everyone else to know this information. So there's still a decision to be made even though there's activity to be done. But in a game where time is limited, then yes, there becomes a decision of, do I want to share this piece of information? Because it's taking away time from doing something else. I worked on a game with the Education Arcade, which is a research group here based in [INAUDIBLE]. And a long time ago, we made a game where it was a location based game. So everyone's walking around with GPS [INAUDIBLE] smart devices. And everyone is trying to investigate the ground water contamination, basically, but everyone also was one of three different classes, and they would get different information based on which class they were. Some of them were, like, [INAUDIBLE] some of them were more-- I'm trying to remember what-- some of them were biologists. Some of them were geologists. I think one of them was, like, a construction engineer or something like that.
So you knew things about how things were built but you didn't necessarily know much about what type of animals will be affected by [INAUDIBLE] And so in the game, information that you got were private, but it was the kind of thing that you want to share. Now, in a location based game, sharing information becomes tough because everyone is physically spread out over an area. This was a game that was played over the entire MIT campus. Everyone had walkie talkies. But the nice thing about walkie talkies is that only one person can talk at a time. Otherwise, no one really can make sense of everything. And it's still kind of the case as you try to play the game with cell phones because you can only talk to one person at a time unless you, like, set up some crazy network. Even if you try to do text messaging, that's [INAUDIBLE], right? So that's one way that you can get people involved and feel like they're contributing, which is just, give everybody little fragments of information that they need and then get them to-- and then the game becomes more about how do you go about sharing this information rather than what decision you make with the information. [INAUDIBLE] once you got all the information, there is a decision to be made. And in some cases, it feels like a board room where everyone's got different ideas and trying to encourage you to do it in certain ways. And that's great, because [INAUDIBLE] is weighing the pros and cons of every possible option. But in some cases, it's just, like, one person just decides to be the decider and is going to tell everybody else the decisions and then everyone else [INAUDIBLE]. Has anyone played, like, sort of cooperative live action-type games? [INAUDIBLE]? AUDIENCE: Oh, cooperative live action type games. I played [INAUDIBLE] where there was, like-- I mean, a lot of [INAUDIBLE] to do when you're [INAUDIBLE] make sure that-- you need to get everyone's [INAUDIBLE] because that will make people communicate and work [INAUDIBLE]. 
And oftentimes-- so, for [INAUDIBLE] like, I remember [INAUDIBLE] in particular, there was [INAUDIBLE]. You would need, like, [INAUDIBLE] components, which you can [INAUDIBLE] on your own. You can gather the resources and stuff. And then you needed to successfully [INAUDIBLE] in the blank, Hangman or Pictionary or something that's describing [INAUDIBLE] And it was nice to have this cooperation [INAUDIBLE] cooperation is like, did someone want to, like, [INAUDIBLE] trying to describe [INAUDIBLE] face to someone. And it was also [INAUDIBLE] like, share a little bit about them. It was also interesting that oftentimes, someone says, does anyone want to, like-- when you're doing Pictionary, trying to describe the word destruction. It's like [INAUDIBLE] people talk about your research after that. PROFESSOR: Right, right. Of course, yeah. Yeah. So I think what you're describing is, like, the live action role playing [INAUDIBLE]. AUDIENCE: Yeah, live action [INAUDIBLE]. PROFESSOR: Yeah. But a lot of live action role playing games, even though you may be cooperative on that one activity, you don't know what the motive of every single person in that room is. Whereas in non-LARP type games but that are played out in live action-- say, training simulations, for instance-- often, everyone's actually on the same team. Everyone is very, very clear [INAUDIBLE] working toward a common goal, just given different roles. So, [INAUDIBLE] I trained a couple of years in the Army and we had a lot of live action training simulations where, yeah, everyone's on the same team and there are instructors out there trying to make your life difficult, sure. But everybody that you're actually working with is communicating and trying to do the best job that they can towards the common goal. Has anyone gone through anything like a sort of simulated exercise or anything-- either an evaluation or-- you know, like an emergency evacuation? AUDIENCE: Yeah, [INAUDIBLE] fire drills.
PROFESSOR: We have fire drills. All right, that's a great insight. Everybody wants to get out of the building alive without falling all over, sure. And that's a great example, yes. AUDIENCE: [INAUDIBLE] in a lot of the, like, scenarios that [INAUDIBLE] you and your team [INAUDIBLE] even a company trying to design this product and you had all these challenges. And it was primarily intended to be cooperative [INAUDIBLE] an element of, like, [INAUDIBLE] other team [INAUDIBLE] PROFESSOR: Do you-- so you compete with the other teams for the-- do you feel like you're in competition with the other teams while you are basing your decisions? Or is it really just right at the end when you present everything? AUDIENCE: Mostly right at the end, but there were certain ones where, you know, you had like a shop, and there were pieces you could get while you were waiting in line to get different pieces. Like, you were kind of looking at what all the other groups are doing. PROFESSOR: OK. AUDIENCE: And part of it was because it was the same groups all semester. Even if there wasn't any [INAUDIBLE] competition between groups at first, because over the entire semester you were working with the same team, you kind of develop a [INAUDIBLE] PROFESSOR: Yes, yes, I've been in similar exercises, mostly in school as well. AUDIENCE: Could this be as simple as just playing Lava when you're, like, three years old, and you can't touch the ground, and you get from one end of the room to the other without ever touching the ground, jumping pillar to pillar or something? PROFESSOR: Well, let me see. I guess [INAUDIBLE] cooperative, I guess. There's player versus gravity, pretty much. And yeah, I mean, it depends on whether your goal is to be the last person standing, in which case it does become a competitive game. AUDIENCE: Sure, but it was one where you're just trying to get from one side of the room to the other, something like that. PROFESSOR: Yeah. AUDIENCE: And you're working together.
PROFESSOR: Yeah, if you're all working together, I would say that is a fairly cooperative game, and the rules are fairly simple. Just don't fall off the pillows, which are unstable. And that's very different from having different roles. Everybody has the same role. Everybody is just put in a situation where they can't accomplish much on their own, but they can do quite a lot if they pull-- AUDIENCE: I always played with my sister, and we kind of had different roles, because I was three years older than her. When we were five years old, I could jump [INAUDIBLE] further than her, right? PROFESSOR: OK. AUDIENCE: [INAUDIBLE] put up different roles there a little bit. PROFESSOR: I mean, she also has a lower center of gravity, I'm suggesting, then? AUDIENCE: Sure, she could also barely walk. PROFESSOR: So you're playing with your sister. So, it's kind of interesting. I don't actually like a lot of creativity competitions where they put teams against each other. I understand that it does get people enthusiastic and engaged, and [INAUDIBLE] they want to be able to do the best. But it discourages a lot of information sharing, which could actually be better for learning in the process. So when we were [INAUDIBLE], for instance, a lot of [INAUDIBLE] run by colleges will give out awards. They'll quite simply have prizes for people who win maximum applause or something like that. I think we do give verbal [INAUDIBLE]. Has anybody ever done, like, [INAUDIBLE] awards? AUDIENCE: No, I've never done any awards. When we're doing [INAUDIBLE] the joy of engagement is being able to share [INAUDIBLE] and making sure people share throughout [INAUDIBLE] PROFESSOR: And those are usually the shout outs that give [INAUDIBLE]. These are the people who really helped as a team, the people who were clearly a benefit to more than just one project.
We will do those kinds of shout outs, because in learning situations [INAUDIBLE] For competitive situations where you don't expect to have a lot of information sharing-- say, a school versus a school competition. It could be math. It could be sports. It could be-- we're working on one right now, which is a [INAUDIBLE] competition, a satellite programming thing. Some of you might have seen the email about that, where a lot of the schools that are working together are geographically separated by different countries, probably speaking different languages. So there may not be terribly much information sharing to begin with. So it makes sense that they won't be sharing information with people that are geographically close to them. So it's OK to sort of put them in competition with each other while expecting they're going to cooperate entirely within their own team. But there's a deep wrinkle to that one, which is when it gets to the final rounds, three schools are put together, and one school is always geographically separated from the other one. And that gives students an experience of what it's like to actually work on the state, right? Because you are working with different companies which are writing code to all run on the same machine. But-- and all trying to accomplish different [INAUDIBLE] problems that they're given [INAUDIBLE] so difficult that one school-- one school could probably do a mediocre job on it, but you need the resources of three schools and all the kids in three schools to be able to solve this particular space problem. So you need to communicate and you need to share. And that's an interesting [INAUDIBLE] of the game where, again, it's all about, what are the limits of communication? We talked about time. We talked about physical distance being a limit of communication. What about things like language? What about things like time zones? All right, so are any of the games that you are making cooperative games-- or cooperative, but also competitive in a way?
AUDIENCE: [INAUDIBLE] the end goal is [INAUDIBLE] I mean-- PROFESSOR: Yeah, it's-- AUDIENCE: In the end, it's like a competitive thing, but to achieve the most you can, it would be cooperative. PROFESSOR: OK, is this the base [INAUDIBLE] one? No. [INAUDIBLE] AUDIENCE: No. Well, yes. I was going to say, we're definitely cooperative. [INAUDIBLE] well, two teams, I guess. AUDIENCE: Two versus one [INAUDIBLE]. AUDIENCE: Yeah. PROFESSOR: Oh, it's two competitive teams. AUDIENCE: Yes. PROFESSOR: So you have cooperation within the team and competition between teams. And which game was the one that [INAUDIBLE] AUDIENCE: The ship captain one. PROFESSOR: The ship captain one, right, right, where [INAUDIBLE] trying to [INAUDIBLE] put all of this information together, right. So, is there anything from Matt's presentation that you might want to try out in a game? AUDIENCE: I really like the idea of small leveling, kind of. So you said he had three-- he gave the flow chart, which I really like. PROFESSOR: Right, right, right. AUDIENCE: Right? So you want to make it balanced between players' skill and [INAUDIBLE]. And if there's too much [challenge] for the players' skill, you have anxiety. And if there's too much player skill and too little [challenge], you get boredom. So we have like a basic version that you can just start with. And then if you're a really good gamer, you might move on to the normal version or the advanced version really quickly. And at first, I was like, oh, this is really complicated, right? Three versions of the game. Then, you play [INAUDIBLE] and you just add one [INAUDIBLE] to that. And that's [INAUDIBLE] changes [INAUDIBLE]. PROFESSOR: And that's another-- in Forbidden Island, I believe the only change in difficulty is where you start-- the way [INAUDIBLE] works is that you are on an island that's slowly sinking into the water. And [? there's a ?] water level marker. So by increasing difficulty, I believe you just start at a higher level of water.
And more stuff is submerged at the beginning of the game. But the rules don't change. And if you're going to try to do something like that in your game, I would certainly encourage you to think of difficulty levels in that way. Don't give us, like, here is the simple set of rules for simple players and the advanced set of rules for advanced players. Instead, change things like just the starting conditions. I wouldn't encourage you to change, like, individual variables in your game, because that might be too complicated to get across. But things like setup-- that's OK. Once you've got the whole game set up at the correct difficulty level, you don't have to worry about the differences between setup for easy and setup for hard. Giving people extra options, for instance, by giving them extra cards or something like that could also be a good way to be able to [INAUDIBLE] change the level of difficulty throughout the game. That's a good idea. [INAUDIBLE] there's something I was talking about cooperative and competitive. He just talks a lot about what it was like to design his first board game. I believe he said that that was his first board game. [INAUDIBLE] his first published board game [INAUDIBLE]. AUDIENCE: First one was [INAUDIBLE] Fish and Chips. PROFESSOR: Oh, yeah. Oh right, Fish and Chips. That's right. AUDIENCE: Oh, wait. I just got that joke. Wow. PROFESSOR: So-- but did anyone observe or hear anything interesting from that one? From the [INAUDIBLE] talking outside of the cooperative and competitive [INAUDIBLE] cooperative game design? AUDIENCE: [INAUDIBLE] do you just mean something interesting about the talk? PROFESSOR: Yeah, anything else that was interesting. I've been asking you questions specifically on his thoughts on cooperative games like [INAUDIBLE]. But that's only part of the talk. That's not his whole talk. So I was just wondering if anyone noticed anything interesting. AUDIENCE: I think [INAUDIBLE] he mentioned Ameritrash [INAUDIBLE] games.
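[The two design ideas in the preceding exchange — the flow channel, where too much challenge relative to a player's skill produces anxiety and too little produces boredom, and difficulty levels that change only the starting setup while the rules stay identical, as with Forbidden Island's water level marker — can be sketched in a few lines. This is a hypothetical illustration for the class notes, not code from any published game; the level names, numbers, and function names are invented.]

```python
# Hypothetical setup table, loosely modeled on the idea of Forbidden
# Island's water level marker: harder games simply start the marker higher.
DIFFICULTY_SETUP = {
    "novice":    {"starting_water_level": 1},
    "normal":    {"starting_water_level": 2},
    "elite":     {"starting_water_level": 3},
    "legendary": {"starting_water_level": 4},
}

def new_game(difficulty):
    """Set up a game state. The rules are identical at every difficulty;
    only the starting conditions differ."""
    setup = DIFFICULTY_SETUP[difficulty]
    return {"water_level": setup["starting_water_level"], "sunk_tiles": []}

def flow_state(skill, challenge, band=1):
    """Classify where a player sits relative to the flow channel:
    challenge far above skill -> anxiety; far below -> boredom."""
    if challenge > skill + band:
        return "anxiety"
    if challenge < skill - band:
        return "boredom"
    return "flow"
```

[Because every difficulty shares one rule set, a player who outgrows `new_game("novice")` moves to `new_game("elite")` without relearning anything — which is the point the professor is making about setup-based difficulty.]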
PROFESSOR: Ah, yeah. AUDIENCE: Something I hadn't heard of. And then when I looked online, it seems to be either something that has made it [INAUDIBLE], sort of? PROFESSOR: It's one of those almost invented internet forum arguments. But it's one that-- I mean, it's like the [INAUDIBLE] of the [INAUDIBLE] in game design. That's another one where-- that's the sort of thing that you see now just to make people roll [INAUDIBLE] But back in the '90s, I guess, it was a bigger thing. It was like, are games about the systems or are games about the stories? The whole thing was just that, and it was people sort of arguing past each other-- not actually wanting to talk to each other, just wanting to shout at each other about it. And Ameritrash and Euro games are one of those things where, purely from the description of Ameritrash, you can sort of expect that the people who like Euro games gave it that term. AUDIENCE: [INAUDIBLE] argument, because the Ameritrash games were-- the big problem with the Ameritrash games were the Milton Bradley Gamemakers games. I think they were called Gamemakers or Game Masters series in the mid '80s. Like [INAUDIBLE] There's another [INAUDIBLE] one. But games with a board and lots of little plastic pieces where there's a lot of, like, detail given to the [INAUDIBLE] molding on the pieces rather than the system. Like, that's where it was thought the money was spent. Whereas a Euro game is, well, it's little wooden cubes, and not much theme going on. And so the system has more of the attention and detail [INAUDIBLE] PROFESSOR: There is a lower cost version that is along those same lines, because Ameritrash is a lot about presentation, right? This is just-- so, a lot of-- I'm trying to think. Munchkin? Anyone play Munchkin? Yeah, OK. So, you know, this card game about sort of dungeon crawling, being really bad role playing gamers. And it's all about the art. It's all about the funny text that they've written.
It's not necessarily about cool-looking pieces, but it's about cool looking cards that you can laugh with, you know, while you're playing the game. But a game [INAUDIBLE] itself, [INAUDIBLE] actually [INAUDIBLE] AUDIENCE: I might call Magic Ameritrash. PROFESSOR: And Magic-- the Gathering sort of trades [INAUDIBLE] same sort of thing as like [INAUDIBLE] layered complexity on top of complexity [INAUDIBLE] very little elegance to the [? system, ?] right? But every once in a while, they sort it out and then it becomes a pretty elegant system and they then keep adding things on top of it. That's because that's how their business model works. AUDIENCE: So I guess a big question about [INAUDIBLE]. How is Ameritrash different from the theme games like Monopoly [INAUDIBLE]? PROFESSOR: Not that different. It's a more derogatory term, but I think when it comes to the stuff that's really old like "Monopoly," there's a limited level of presentation they could possibly achieve with traditional manufacturing. And by the time the [INAUDIBLE] term Ameritrash had shown up, games like Monopoly and Clue and all that had already been shrunk down into budget versions. You know, it's like, you're no longer getting metal pieces. You're getting little plastic pieces that barely look like the dog or the hat they're supposed to be. Games like "Battleship" where it's like, the whole idea is that it's not really, like, the "Battleship" kit looks nice. It's just like this small part of [INAUDIBLE] manufacturing and people grew up playing Battleship. And those games are being sold on a completely different premise. Those games are primarily nostalgia games. Those games are games that you bought because you played them at one point in time. 
But I think Ameritrash was mostly used to brand a certain kind of game that you'll be buying because you want to play it for the first time-- like Munchkin, that you buy to play with a bunch of friends, often published by companies that either had a war gaming background or a role playing book kind of background, like Steve Jackson, for instance. Because they're used-- actually I believe [INAUDIBLE]. Actually, I'm not so sure if this came up in class. This is something that I remember hearing recently. The idea of the game writer, you know-- what is [INAUDIBLE]? AUDIENCE: Yeah, Mack was talking about-- writer versus inventor versus designer. PROFESSOR: Right, so the idea that games were things that you would write comes out of the role playing game industry, where their primary product was books. And sure, books included descriptions of systems, but [INAUDIBLE] prose is your tool to be able to get things across. [INAUDIBLE] to try to use prose on the things that prose is good at-- on things like creating atmosphere, giving you extensive detail about the setting of a whole world or characters, and things like that. Whereas the Euro games were originally designed for quite a different market from the role playing games. Traditionally, role playing games are designed for enthusiasts. Euro games are designed for Christmas presents. Very specifically, they're designed to sell lots and lots and lots of copies at Christmas to families. So, not only did the systems have to be simple enough for kids to be able to learn, but also advanced enough for adults to want to play them and give them away as gifts to other people. And so that's a lot of attention paid to, like, simple rules, very, very complex interactions.
And I don't know what the push towards the simplicity of [INAUDIBLE], sort of the abstraction of, like, cubes and circles and maybe [INAUDIBLE] like the are, like, [INAUDIBLE] AUDIENCE: A lot of it is [INAUDIBLE] factory [INAUDIBLE] the one factory that makes it all. PROFESSOR: [INAUDIBLE] one? AUDIENCE: There's one factory in Germany that made most of them. They don't make them in China, those games. PROFESSOR: OK. So that's economy of scale there, I guess. [INAUDIBLE] easier to make than anything else? AUDIENCE: So, when it comes to-- like I said, Euro games are for families and Christmas presents and stuff? That feels kind of weird to me because I'd expect for children to care a lot about the presentation of the game. So, how would you reconcile that, I guess? PROFESSOR: I don't know much about that social dynamics of the Euro game family. Who makes the call of what game they're going to play, right? Is it the kids or is it the parents? I certainly grew up in a family where the parents would say, this is what we're playing now. We are playing "Scrabble" because mom likes "Scrabble" and mom will destroy all of us at "Scrabble". So I don't know if that's the same thing in European families. AUDIENCE: And so we had a professor come in who talked about this a couple years ago where one thing that happened, at least in Germany, is there is a conference called [INAUDIBLE] Fair? PROFESSOR: Trade Fair. AUDIENCE: Trade Fair. [INAUDIBLE] fair happened in Germany. Thousands upon thousands of people in Germany go to that fair to play the new games before they're published. And [INAUDIBLE] fairs, they give this spiel [INAUDIBLE] awards, the game of the year awards, which then if you get that award, they'll sell a [INAUDIBLE] PROFESSOR: Because every newspaper will cover it. AUDIENCE: All the newspapers are going to cover it. But those families were going to go there, play those games. The kids are going to be interested in those games. 
Then, when Christmas time comes around, the game they get is going to be the game that won the [INAUDIBLE] award, not necessarily the game they played at the convention. But that's the game that got the award. Everybody bought it that year. And the next year at flea markets, that game is all over the place [INAUDIBLE] So there's a little-- it's a very different kind of consumption economy going on over there. But here, it's a hobby market. Here, it's the [INAUDIBLE] stores. It's not necessarily family oriented at all, or family [INAUDIBLE] included in a particular kind of family organization. PROFESSOR: So they're like toy [INAUDIBLE] in the US, then, like Tickle Me Elmo and Furby. AUDIENCE: Yeah, the toy fairs are-- yeah, in the US, they're inventors, products from China, and again, one thing gets bought and one thing is sold, and everybody wants that one thing every single year. PROFESSOR: But the majority of people who go to [INAUDIBLE] are, in fact, families-- tourists, you know. But a lot of them are probably tourists from other parts of Germany [INAUDIBLE], but also from Europe as a whole. But of course, there's a small number of people who are going to [INAUDIBLE] who are the [INAUDIBLE] the people who are going to decide what's [INAUDIBLE]. And they are looking at the families. They're trying to figure out what is going to be that hot product this year so they can get new [? orders ?] in. [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: What time of year is it? PROFESSOR: I think it's August. AUDIENCE: Yeah, it's summer. [INAUDIBLE] [INTERPOSING VOICES] PROFESSOR: Yeah, so-- but I don't know the extent to which the manufacturers need to have lined up their pipelines before [INAUDIBLE] in order to be able to capitalize on the [INAUDIBLE] Because if you miss your window-- if you get the award and you can't actually put [INAUDIBLE] product down there, then you can't make as much money as you [INAUDIBLE].
AUDIENCE: And the toy fairs in the US, are they mainly consumers going to it, or the industry, the professionals, and they're sort of, like, rubbing shoulders? PROFESSOR: Press. Press go to toy fairs as well. AUDIENCE: Yes, press, industry, Walmart, basically. [INAUDIBLE] Walmart and a couple others. AUDIENCE: So it's like, you're an inventor. You're trying to get on Walmart's shelf. So that's why you go? PROFESSOR: Yeah. AUDIENCE: You're trying to find the next hot toy for Christmas. And that's why they-- PROFESSOR: Well, I'm not so sure that the toy fairs in the US are that inventor friendly. AUDIENCE: Yeah, the inventors have already gotten things [? off ?] by somebody. So it's the publisher. It's the maker who has already had the stable of inventors they purchased the product from. And they're taking it to this place to test it out, see [INAUDIBLE] AUDIENCE: But who are they testing it on? AUDIENCE: The press and the people who are going to sell it. But-- [INAUDIBLE] Exactly. AUDIENCE: Isn't [INAUDIBLE] backwards, though? Because it's the consumer that's going to be buying it, not the-- AUDIENCE: The consumer doesn't have a need for it until they find out about it, though. It's a very different process. The consumer is like-- when you look at toys, just look at the US toy industry. The customer just isn't involved. AUDIENCE: It's just whatever the hype machine builds up? AUDIENCE: Yep, whichever hype machine has the most money behind it gets the product on the shelves. And these days, it's [INAUDIBLE] properties [INAUDIBLE] trans media, but like, most media properties. There's-- the game, the toy, the [INAUDIBLE], the comic, everything around it. AUDIENCE: Like a multi platform. AUDIENCE: Yeah, yeah. PROFESSOR: [INAUDIBLE] because you have multiple advertising campaigns pushing each one of these threads. But they're also sort of pushing the entire [INAUDIBLE]. AUDIENCE: If this is something of interest to you, go to a Walgreens or a CVS.
Go to the toy area, and look at the toys that have really poorly designed games on the back of them, just to kind of see, like, these are the things [INAUDIBLE]. Somebody made it. It didn't get much press. It didn't get that [INAUDIBLE] behind it. And just, these are [INAUDIBLE] what is the effort they put into that thing? For the most part, it's just packaging. The actual thing [? that's inside ?] of the package. PROFESSOR: I'm trying to think. Where is the big toy fair in the US? AUDIENCE: In New York. PROFESSOR: In New York. Do you know when it is, ish? The summer, I suppose? AUDIENCE: I want to say summer because [INAUDIBLE] Toys "R" Us. Well, I helped [INAUDIBLE] Toys "R" Us [INAUDIBLE] do this thing for a toy fair. I want to say that, but I could be wrong. PROFESSOR: Did you [INAUDIBLE]? AUDIENCE: I didn't actually go. I just shipped the product. PROFESSOR: OK, so, my [INAUDIBLE] You may see a [INAUDIBLE] collectors, in particular. AUDIENCE: My customer was, like-- they made point of purchase displays. They made point of purchase displays for retail outlets, and it was like a huge deal for them. PROFESSOR: OK, so this is where they're selling things to stores to sell things, not the actual product that [INAUDIBLE]. Be caring sure. [INAUDIBLE] AUDIENCE: February. PROFESSOR: February? Gee. AUDIENCE: Toy fair in New York-- 2015, February 14 [INAUDIBLE]. PROFESSOR: Oh good lord. February-- OK. I have a lot more to learn about the toy industry. AUDIENCE: So I guess it's Valentine's Day, after cards, then toys. PROFESSOR: The other thing that I thought was actually kind of interesting in the video was how he was talking with the folks at [INAUDIBLE]. And he was describing embodying the player, giving the player something that he can identify with, either on the board or, if all else fails, somewhere in the packaging-- something that he can project himself into.
I'm not entirely sure if that is sort of universally accepted across all publishers or even conventional wisdom across the board game industry, but it's certainly interesting to see it from the point of view of one publisher saying, this is something important about the games that we want to publish. And we got a little bit of this from the [INAUDIBLE] gaming [INAUDIBLE] the other day. But it's [INAUDIBLE] I tend to go to the publisher and try to say, this is a game that [? could be ?] interesting, definitely try to do some research on not just the kind of games that they make, but also how they present those games to people, right? You know, you can probably make a good game. Like, the people who are at those companies have seen enough good games to be able to see past anything you may have missed. But it might improve your pitch a little bit to say, we've taken into account that you like-- I've taken into account that you like to position your games, present your games a certain way. And this is how you could do it with the game that I'm pitching to you, even though what I'm showing right now, the prototype, doesn't have that. Just being able to speak to that means [INAUDIBLE] you're aware of the company's strategy and how [INAUDIBLE]. So if you are talking to a publisher, which generally does [INAUDIBLE] you know, I don't know who publishes [INAUDIBLE] but someone does, because I [INAUDIBLE]. But to say that this particular set of game mechanics would be a great tie-in for this particular property or something that you already have. Even though that may not be something that you want to do, it's probably good for a publisher to be able to realize that you understand [INAUDIBLE] particular section of the business that you're in. AUDIENCE: Also, some-- so Fantasy Flight doesn't want you to have IP attached to your games. They're going to bring their IP to the games. So they want mechanics. They want rules. 
They want how it's going to play, but they don't care about-- the space travel game you might sell them. And they say, all right, it's a [INAUDIBLE] fantasy game, the IP [INAUDIBLE] with us here. PROFESSOR: Also, probably gives you an idea of the kind of deal and contract you can get out of them. Because if you say you want to retain absolute creative control on it and determine the art that's going to go on the box or something like that, you might even be able to get that contract from someone like Fantasy Flight, but it's something that might be negotiable for-- AUDIENCE: [INAUDIBLE] there you would talk to them about what Kickstarter campaign you're going to run. That's what they're going to do-- a Kickstarter campaign with the game that they're publishing with you. PROFESSOR: So in other words, before you go and do a pitch at the publisher, just don't go in with a [? generic ?] pitch. [INAUDIBLE] go into something like a convention. You know, there are games that just have different [INAUDIBLE], for instance. [INAUDIBLE] of EB Games. There's one that's going to be right here on campus. If you've got a prototype that you want to let people know about and make-- I guess we don't get many publishers here, but you might get-- AUDIENCE: That's probably going to change soon. PROFESSOR: Yeah, Gathering of Friends is one such group where you have inventors and publishers all sitting at a big table. You don't need to [INAUDIBLE] prepare something special for publishers [INAUDIBLE] because they show you to multiple publishers at once. [INAUDIBLE] actually going to submit it to Hasbro or something like that. You might just want to say this might make a great Transformers game because parts connect to each other or object [INAUDIBLE] mode, or something like that. Yeah, that's what I want to say about that. So, today is [INAUDIBLE] play test day. It's almost 2:00. We have all our parts up here. So, how about we start play test at 3 o'clock? 
That should probably give you enough time to be able to get something playable by then, I'm hoping. Figure out what you want to test, how you're going to present it to people, and what questions you need answered to [INAUDIBLE]
MIT_CMS608_Game_Design_Spring_2014
11_Defining_Game_Play_and_Sport.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. I'm just going to get started. First, a little administrative stuff. Definitely fewer people signed in on Monday than actually showed up in class. So there were a bunch of people who just forgot to sign in. I tried to check off people that I remembered being in class. But if you were in class and you think that you may not have signed in, I have Monday's sign-up sheet here. So this is up front. I'm not going to pass it around. You can just come up here and sign it if you need to. The actual attendance sheet is, I believe, on the table right there now. Yeah. So make sure that gets handed off to anybody who comes in later. And I'm going to start off talking about the reading. We're going to play some games that are related to that reading. And then we're going to take a break, do your set up, and then you'll be play-testing with each other. All right, who is on a team that doesn't have a game yet? You're waiting for team members to bring it in. You don't think you're going to have a game ready today. STUDENT: We don't have a game. STUDENT: We don't have our board, but we can do it on tiles again. PROFESSOR: You can do it on tiles? STUDENT: So far we've only play-tested in the hallway. PROFESSOR: OK. STUDENT: We actually bought a board, which is on its way. PROFESSOR: OK. On its way is fine, because we're going to do play-testing in the second half of class. But on its way, you mean like US Postal-- STUDENT: On its way as in on a truck. PROFESSOR: Oh. STUDENT: We've only played it on tiles. So far it's worked. So we can just do it on tiles. PROFESSOR: On tiles? You mean-- STUDENT: Like out in the hallway. PROFESSOR: Oh, because it's a live action game. STUDENT: Yeah. 
PROFESSOR: Oh, all right. That makes sense. Actually, that will work fine. All right, so today's reading was Parlett. And some of you may be surprised that we jumped all the way back to chapter 1. But it's kind of revisiting an old topic, right? First of all, he talks about the basic concepts of games and play and sports. And I kind of want to go back to that. But I've already voiced my own opinion on the "what is a game" question. And I think it's not very productive to talk about what is a game, because then you now have to decide what isn't a game. But what I do like to talk about are all the different things that play and game can mean-- especially in the English language. Even Parlett talks about how it's a little weird that we have two different words with two different origins to mean very related things. But as a result of that, these two words have kind of been used and reused to describe a whole bunch of related-- but not quite the same-- concepts. And I just want to be able to run through that with you. So we have play. We have game. We have sport. So let's talk about what would be, like, the noun definitions. Like, what are all the things that play can mean, game can mean, or sport can mean when you are using it as a noun? STUDENT: Game as in-- I don't know. Like a fighter maybe, like to spit game. PROFESSOR: Uh-huh. To spit-- to spit back game. So that is-- OK. I'm going to use the word swagger, because I think it gets the concept across well. Yeah. STUDENT: Like gaming the system. You know how you're like-- PROFESSOR: Well, that's kind of verb-y, but yeah. STUDENT: Oh yeah, that's right. PROFESSOR: To game-- which means to-- STUDENT: Sorry? PROFESSOR: How would you describe that concept-- to game the system? Gaming-- gaming the system. STUDENT: I don't know. Ways to cheat, or-- PROFESSOR: Look for loopholes, exploits, you know-- exploit, maybe. OK. STUDENT: Play-- it can be like a move, or like an action you take. 
PROFESSOR: Like a baseball player. Right, you know-- that was a great play. STUDENT: Yeah. PROFESSOR: So like tactics almost, right? STUDENT: Yeah. PROFESSOR: And one that you actually have to execute. I'm glad that you caught that. It's something that actually had to have happened. STUDENT: Like a theater performance. PROFESSOR: A theater performance-- so also stage, or opera, or something like that-- a stage play. What else? STUDENT: Play as in not that serious. PROFESSOR: OK. STUDENT: Playing around. PROFESSOR: So child's play. You know, this idea of the thing that kids are doing most of the time. That's playing in some sort of general noun sense. OK. So I'm going to go and put child's-- STUDENT: Like pressing play-- for a video or starting, like, a game. PROFESSOR: OK. So play as in to initiate a sequence. Sometimes overlaps a little bit with this. But yeah, you know, it will do. Because you can use play to describe something that happened in the past, or to make plays, and to start a play. All right. Or press Play to-- you know, actually, that's-- instead of initiate, maybe it's like this icon, right? STUDENT: Yeah. PROFESSOR: This icon on the VCR-- no one uses VCRs. How about on a QuickTime streaming window? All right, so what else? No one's touching sport here. STUDENT: Sport, like to show off. Like if you're sporting a [INAUDIBLE] thing. PROFESSOR: OK. So like sporting your colors. STUDENT: Yeah. PROFESSOR: OK, so like a display of some sort. I'm going to put that more in the verb side of things, OK? STUDENT: It could be like a nickname for a young-- PROFESSOR: OK. Sport. Yeah. STUDENT: It can also be like a good sport, so like a nice game. PROFESSOR: So some of those virtuous kind of things. Yeah. I'm not sure if I got that right, but-- STUDENT: It can mean like fun or hobby-- like you do something for sport. STUDENT: Fishing for sport. PROFESSOR: Fishing for sport? Yeah. STUDENT: Like hunting for sport. 
PROFESSOR: Fishing for sport as opposed to sport fishing, right? Those are, you know-- [LAUGHTER] That's another pun, but never mind. So for fun. So I mean it's funny, because often it is used to make something sound like it's for low stakes when it actually is very high stakes. Like, you know, oh, we're going to have people for sport. [LAUGHTER] I just think of like 18th century literature when it comes to like, he plays at relationships for sport, or something like that. You know, it's like something that's supposed to be high stakes, but it's not. OK, so I think maybe instead of for fun, I'll say for low stakes or no stakes-- which is bizarre. Because sport is usually a very high stakes thing. What else? STUDENT: People think sport is something athletic. PROFESSOR: Mm-hmm. OK, so there's a whole section in Parlett where he talks about the possibility that the word game might have come from the bending of knees, because gam is a sort of Welsh root for a leg-- gam, cam. But that's nowadays more associated with some sort of-- physical activities and sport tend to go together. What else? STUDENT: Like something that you hunt-- like that sort of game, like-- PROFESSOR: Like poultry? Yeah, game as in animals that you hunt. Yeah. STUDENT: [INAUDIBLE]. STUDENT: I guess this is more of an idiomatic expression, but people tend to use game when something is also low stakes. Like, you've heard this as a phrase-- "do you think this is a game?" PROFESSOR: Yeah. So again, something that is low stakes or is inconsequential, almost. Like there is no penalty for having done this thing. All right, what else? STUDENT: [INAUDIBLE]. Well, along the same lines, what if you're like specifically making a joke out of something-- like you've made it a game to do that? PROFESSOR: Um, hmm. STUDENT: Like that's sort of like an intention of mocking or-- PROFESSOR: Oh, hmm. I feel that's related to this one. STUDENT: Yeah. 
PROFESSOR: Because they're kind of-- when you make a game of something, what you're really doing is kind of belittling it, right? STUDENT: Yes. PROFESSOR: So I think that's just a different application of the same thing. STUDENT: But when you're making a game of something, aren't you also making a structure of it? Or is that different than what you're saying? STUDENT: For me? STUDENT: Yeah. STUDENT: Well, I was saying from like the sort of a mockery sense of it. Like he made a game of our process, or something like that. STUDENT: OK. PROFESSOR: But I think making a structure of it is a different application, right? It's like you are applying rules to something that might have not needed it, necessarily. But, you know, now you're come up with rules. I'm going to make a game of tipping, right? Because we have five people, [INAUDIBLE]. All right. STUDENT: It can also refer to [? hooks. ?] Like before the Superbowl, everyone talks about the big game. PROFESSOR: Big game-- right, OK. I'm just going to say the big game, or the game. STUDENT: And I'm also thinking of the phrase like it's all part of the game, where someone might do something-- PROFESSOR: Are they a sports lover? STUDENT: No, isn't that skin in the game? PROFESSOR: No, it's not that-- even in sports, it's all in the game, right? STUDENT: Even in sports it's in the game. PROFESSOR: But skin in the game is almost certainly a morsel of some sporting good that has-- STUDENT: It's all part of the game. PROFESSOR: It's all part of the game, yeah. STUDENT: Like when someone does something unexpected, or maybe you might think it's unethical, or slightly immoral, or going out of the boundaries of the-- STUDENT: Are you thinking of Game of Thrones? PROFESSOR: OK. STUDENT: I'm thinking of like a lawyer doing something backhanded. And you know, like the prosecutor is surprised. And the defense attorney is like, it's all part of the game. PROFESSOR: Right. 
Yeah, so there's just sort of like this bounded space where things are permitted-- like specific kinds of things that might-- this introduces a concept that-- it introduces the magic circle. I'm not sure if we have any reading that touched on this-- but this idea that a game is this bounded space where you can do things that you wouldn't necessarily be allowed to do in real life. And, similarly, consequences that happen inside don't necessarily apply outside-- like what's incredibly important inside the game doesn't hold outside of it. Like the position of a ball is generally meaningless outside of the game. But inside the game, it's everything, right? But you know, you can body check somebody in a hockey rink. You know, you're not really allowed to do that on the sidewalk. So I think that gets close to what you just said about your lawyer application. You're allowed to do this within these parameters, because this is how the game is played, all right? Whereas, I'm wondering about the game-- there's this concept of the game, as in the big game, like the Superbowl. And I'm wondering, is that only used in broadcasting? Or is that-- STUDENT: So there's certain rights. So you can't say like Superbowl without getting permission of whoever has the rights to that name. PROFESSOR: But even something like "are you going to come over to watch the game," which doesn't necessarily mean that it's the Superbowl. STUDENT: Right. PROFESSOR: Like there's this weekly thing. But it seems to be very TV related. "The Game" as this thing that you see on TV is this event that's happening at a specific time, rather than football in general. STUDENT: There's some crossover, then. Because like the stage aspect-- because you're watching it-- but also the athletic aspect-- PROFESSOR: Yeah, it goes back to this. STUDENT: --because it's a sport, and you watch the game. PROFESSOR: Yeah. So I'm just going to put TV next to that. OK. You had your hand up for a moment. 
STUDENT: Yeah, I was going to say games as like skill, as in like he's got game-- sort of like skill or ability. PROFESSOR: I think that goes back to the swagger a little bit. STUDENT: Yeah, kind of. STUDENT: But swagger's like attitude, whereas this might be skill. STUDENT: I guess if you go by-- PROFESSOR: The assumption, I think, is that those two are related. STUDENT: Well, one is the appearance of skill, and the other is having skill. PROFESSOR: Oh, spit mad game versus got mad game. OK. All right. OK, OK. So actual skill versus the portrayal of that. STUDENT: The game, like as a pick-up artist thing. PROFESSOR: Oh, that's sort of the dating game, basically. It's kind of like-- I know that in those books that's very specifically called a game. But I think that the use of the words "the game" to describe dating in general predates that book by far. STUDENT: Play could also be referring to sports. Like in football, there's a certain book that you [INAUDIBLE] plays-- a play book. PROFESSOR: Yeah. That was kind of like defense, right? But you're right. You can write these things down. You can talk about them as a library of things that you can do. STUDENT: There's the Great Game. PROFESSOR: The Great Game? STUDENT: Was it the Afghan War or the Crimean War? PROFESSOR: Oh. So now we're thinking part two. STUDENT: Yeah. OK. So like England versus Russia-- or Napoleon versus Russia-- I forget exactly the years and dates. But it was a sociopolitical conflict going on. PROFESSOR: So there are specific wars that are referred to as great games. Game of Thrones probably is related to that, too. Although, that includes political things. I mean, even just on the noun side, we've gone through a lot. If we include adjectives-- and we've brought up a couple of adjectives so far-- part of those talk about how some people can say that you have a [INAUDIBLE], which means you actually have [INAUDIBLE], which is kind of weird. 
But it's his extrapolation of where the word game might actually have come from-- Celtic origins rather than Latin origins. How about verbs? Sporting-- to sport-- well, I guess we already did to sport. Although-- to game, to play, [INAUDIBLE]. STUDENT: Like playing [INAUDIBLE]. PROFESSOR: Yeah, exactly. STUDENT: Like to execute. Like if you make a play, that's like you're executing it. STUDENT: Building on that, it could be like a performance-- like to play an instrument. PROFESSOR: OK. STUDENT: When you play [? a part ?] you kind of execute it. STUDENT: You're executing a series of things. You execute a stage play [INAUDIBLE]. PROFESSOR: But to play something in a game seems like a much more time-limited thing than to play a musical instrument. That seems kind of like a lifetime thing, almost. Like I play something. I play the clarinet has a very different connotation from I play this queen, or I play this card. So those are two-- probably on the same spectrum, they're like almost on opposite ends-- closely related, but on opposite ends. Sporting? Sporting I generally associate-- again, sort of like "to be a good sport". But this is more like fairness-- sporting-- give someone a sporting chance kind of thing. If it's just to finish. STUDENT: It actually might be. PROFESSOR: It's funny how this seems to be like the only connotation of game as a verb, right? But gaming is also kind of associated with the gambling industry. If you go to a gaming conference, what you actually mean is that they're talking about stakes-- stake games, roulette-- STUDENT: What about like to game-- like I play games. I game. PROFESSOR: Yeah, so that's a fairly recent connotation-- usually in association with digital games, but not always, right? So to play. So we can say to someone, I am a gamer. It's not implying that I look for loopholes in everything that I can do. So I'm pretty sure we could keep on going for a while. 
I just think it's kind of neat that we have all of these different terminologies that all use the same words. It often can get confusing in sort of casual speech. Specifically, it doesn't get too confusing when you're talking to fellow designers or other people in the game industry. Because the context in which they're using any one of these words, to me, is pretty clear. And you're always thinking about the context when you say, you know, I game the system, or they've got game. You know, it makes it very clear. But I do think that Parlett kind of slightly opens the box. He tries to cut it into a sort of a cross section of all the things that these words can possibly mean. Yet, it's even richer than that. And the reason why he does that is so that he can actually get into the topic of his book, which is board games specifically. And he starts breaking it down. He's like, all right, now that we know that games and play and sports and everything are all kind of fuzzy, and muddled, and not really very clear, how do we sort of at least clarify what we're talking about when it comes to board games? Anyone remember the five things that he ended up with-- the five categories of board games that he kind of ended up with? Four of them rhyme, one doesn't. STUDENT: Race games. PROFESSOR: Race games. STUDENT: Space games. PROFESSOR: Space games. STUDENT: Chase games. PROFESSOR: Chase games. STUDENT: Displacement. PROFESSOR: And displace, which he admits he only gave that name because it rhymes. And then number five? I think he just gave up at this point. It was interesting. Because he was kind of saying, like, here are these four that, traditionally, other scholars like to study, because they are bereft of theme. And they look at themed games, and they say, like, that's not worthy of studying. There's this whole chunk. And Parlett says, wait a minute. First of all, most of the games that you'll buy off the shelf right now are probably themed games. 
And the reason why there is this interest in race games, chase games, displace games-- space games? I think I'm confused. He also describes those more as positional games. And positional games of the sort that scholars like to study tend to be folk games that have been handed down for generations. Of course, he's narrowing it down to board games. He's not including things like card games, which are also a very rich game tradition. He has a whole separate book-- that I think we also get to in this class-- on card games. But this is just specifically his board game work. But he wants to start looking at theme games. And this was him introducing scholarship on theme games later on in his book. I just want to make you realize that the whole idea of studying games as a product-- we already talked a little bit about how the idea of a game as a product to begin with is fairly new. We're talking about the 1900s when that starts to become a thing. But then the idea of studying something like that-- I think we're talking about like the '50s and '60s-- very, very recent-- the idea that games as a product is something that we can study. Whereas things like games as a sport can probably go back a little bit further than that. Statistics, for instance, has been obsessed with baseball for a long time, because it's very rich and we have good records. So I don't want to get too much into how he arrives at those definitions. You know, he takes a couple of definitions that other people have come up with and tries to build on them. Like all scholars, he's citing previous work. But what I'd like to have us play today are what I feel are a couple of modern takes on those categories that he came up with. So in the race game-- what's a race game, if you have to describe it in one sentence? STUDENT: You get to the end point first. PROFESSOR: Get to the end point first, right? So Cartagena-- has anyone played this? OK. So if you look at the game, it feels a little bit like a tile-laying game. 
Because what you're really doing is you are trying to create this path from the jail that is keeping you-- you're a bunch of pirates-- all the way to the boat. And then you can run off scot-free. But one of the core mechanics is that every player controls a different set of pirates. You have a bunch of different colors. You're all trying to get to the boat first. So the first person to get your entire gang of pirates to the boat wins. So it's a race game. You're racing against other people. It just happens that the track that you're racing on is something that you build over time rather than something that's pre-determined. So that's a modern take, I think. Let me skip over to the other things. For displace games, has anyone played Twilight Struggle in this room? OK. I may need you to help explain this game to other people. STUDENT: OK. I've played it once, like at the end of my first semester. PROFESSOR: OK. All right. I'll just take a look at the rules again. This is unfortunately a little bit of a tall order, to ask someone to pick up Twilight Struggle in class. But this was looking at ease of the design system. This is a war game. You are given a map of the world. STUDENT: Oh, gosh. PROFESSOR: OK? It's the Cold War-- 1945 to 1989-- and there's a whole bunch of things that come from modern game design, like you have event cards that recall important things that happened-- the Cuban Missile Crisis, the Korean War, Pershing II rockets deployed, things like that. But again, it's about influence across a space, which just happens to be laid out on top of the map of the world-- regions like the United States of America, South and Central America, Western Europe, Eastern Europe, and so on and so forth. 
So you're deploying influence points, which is not all that different from deploying what he calls mans-- men-- mans-- across a space, and trying to push influence towards your faction rather than the opposite, depending on whether you're playing the US or the Soviets. So displace games-- hold on, are those displace games? Well, let me just set up this one too. Scotland Yard-- I think this might have been-- this was what I intended to bring out as displace. But I'm not quite sure that you capture any pieces in this game. Do you recall? Do you capture things? STUDENT: You capture influence. You trade influence back and forth over-- PROFESSOR: Right. You capture it, but then you can reintroduce it later on. STUDENT: Yeah. PROFESSOR: OK. So this would be displace. Scotland Yard, which I believe we have mentioned in class before, is kind of like the evolution of the chase game. Chase games are super old. Some of the more famous ancient ones are Scandinavian in origin-- a lot of Norse-- a lot of Viking games-- where you have a king. You have a couple of bodyguards for the king. And you've got a whole bunch of low level pawns all trying to flank the king. That's the theme of a whole genre of games called Tafl, of which chase games form a big part. In this game, you are Mr. X on the run from a bunch of Scotland Yard detectives. The map is London. More importantly, it's the London public transport system. And what you're trying to do is you're trying to go underground. You're trying to grab cabs, take buses, and conceal where you are while the rest of the players are all controlling the detectives. Actually-- yeah, three to six players. The people who are controlling detectives are basically just trying to flank you. And so-- a nice little modern evolution-- they have a neat little doodad, which is the way to be able to track the moves that you made without revealing them to your opponents. STUDENT: Is the bad guy's position private or public? 
PROFESSOR: It is private. But little bits of information pop up. So you reveal that in public. Empire Builder is a train game which one of our grad student alums now almost exclusively studies-- a very interesting kind of train game that you may not have heard of, called crayon rails, where you are given a big sheet of plastic laminated board-- I guess it's kind of like a jigsaw-- and a grease crayon-- a bunch of grease crayons, actually-- Crayola washable crayons. Here we go. And you are just drawing your rails across the map, trying to establish your railroad empire. So this was the one that I was thinking of as space, because it's all about occupying space. But you're occupying space with a network, rather than occupying space with just pieces that you're placing down. You're creating lines across the United States while you do that. So I think this particular genre of train games is an interesting take on space games. There's a whole other genre of train games that's really more about the economics of what it's like to be a robber baron train lord. Those are really fascinating as well. And those fall kind of more on the economic side, which Parlett doesn't really address. Or maybe those fall under his definition of theme games. STUDENT: Yeah, I think they do. PROFESSOR: Yeah. And Power Grid is just this huge mishmash of everything. But I am using it as an example of a theme game, where if you play any of these games and you play Power Grid, you'll see their connections. It's a game where you play on a network map of Germany. And you're basically trying to create efficient links between different power stations that you are building. And it's not all that different from creating a railroad empire. You just happen to be trying to create an energy generating empire that's going to use different commodities-- like oil and-- I think you can burn refuse. Yeah, you can burn refuse. You can burn coal. And you're all playing on the same space. 
You can occupy space. So it's kind of like a space game, because when you take a space, no one else can take that away from you. But most of this game is about the economy rather than about the specific positions where you place things-- although that's going to influence your economy. So it's a complicated game. It's very well designed if you have the right set of rules. The rules that come in the box have misprints in them. Specifically, they break things into turns, phases, and steps. And then they kind of don't use those words consistently. If you are interested in seeing how fairly straightforward rules can be really, really confusing, take a look at the rules of Power Grid that come in the box, then take a look at the rules that I printed out. Because these are the revised rules from BoardGameGeek that are basically translations from the German rules. And they're a little bit more consistent. It is a complicated game. We might not be able to get through a full game and learn the game at the same time. Has anyone here played Power Grid? I encourage you to set up the game and help more people learn it. All right. So what we're going to do-- actually, any questions before we break out into game groups? No? OK. So we'll play this until about-- actually, I think all the way until 3:00 probably, or at least 2:45 or so. STUDENT: We'll call time [INAUDIBLE]. PROFESSOR: Yeah. We'll have about 15 minutes for you to get all of your play-tests ready. And then the last hour is when you're play-testing. Cool.
MIT_CMS608_Game_Design_Spring_2014
15_Assignment_3.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, I'm going to start handing out assignment three. And on the back of assignment three are the changes that we've made to the syllabus, which is mostly in the order of reading, but also we've lifted some of the games that-- we've [INAUDIBLE] some of the games that that we're going to [INAUDIBLE]. Because basically we change the assignment three in the middle of the semester. Who actually read ahead to figure out what assignments are you on? AUDIENCE: [INAUDIBLE]. PROFESSOR: How long ago did you read ahead? AUDIENCE: Like two weeks ago. PROFESSOR: Two weeks ago. OK. So whatever was online is different from what it is now. AUDIENCE: OK. PROFESSOR: But if you just read like the one line description-- AUDIENCE: Yeah. PROFESSOR: It's that. But all of the text that we wrote up completely changed. So this is [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. PROFESSOR: What? AUDIENCE: Are there any [INAUDIBLE]? PROFESSOR: It's mostly rearranged. Mostly just scheduling, rather than the actual readings. I don't think we introduced any new reading, except for maybe [? MO ?] 7, which is [INAUDIBLE]. That's been on our website for a while now though. AUDIENCE: I did all the reading. PROFESSOR: Yep. So I don't know if you've seen that one. [INAUDIBLE] Simulation 101. It's like a series of webpages. It's all online on the-- AUDIENCE: Yeah. PROFESSOR: OK. All right. So you better read it. So I'm just reading this because people are still cuttting things up. So I figure I might just talk. So assignment three is going to be about applying everything that you've learned so far. Remember, assignment one was kind of just getting used to this, the discipline iterating. 
Of prototyping, of iterating. I feel that you've all shown a lot of good experience and skill in doing this. One thing, while I saw those presentations, it reminded me that we actually have notes about all of your individual write-ups that I should copy and paste into the grading section of the [? fellow ?] webpage. AUDIENCE: I'll do that next. PROFESSOR: Oh great. [INTERPOSING VOICES] PROFESSOR: That'll be some help. Yeah, I just forgot to do that. So you'll also be getting a little bit more detailed feedback on your individual write-ups. Like, in terms of the game designing, things are coming along pretty well. And if you want more feedback on the individual elements of each game that you worked on in the past, we could totally get that to you. I'm just guessing a lot of you are probably more eager to just move on to the next project. So I can talk about that. Remember, assignment one was about the skill of prototyping and iterating. Assignment two was about trying to come up with some sort of holistic aesthetic experience. And in assignment three we're asking you to apply that to the end of trying to depict the perspective of somebody who lived in the real world. That person could be a very specific person, like I'm going to bring up some examples here. Campaign Manager 2008 specifically states that you are the campaign manager of either the McCain or the Obama campaign-- that's one person. Now, the powers that you have in the game really represent a whole network of campaigners, but you're the person who is calling the shots. You are trying to win the electoral college through this. We probably don't have time to play this, but anyone who wants to can take a look at what's in here: all the bits, the way how each state is represented, how the United States is represented, the fact that you're basically just trying to win points among demographics by raising hot button issues. That's all this game is about, [? and it's ?]
putting you in the shoes of a very specific person who actually exists in real life. An alternative would be something like Tulipmania 1637, which is a game about-- anyone want to hear about the Great Tulip Crash of 1637 in the Netherlands? AUDIENCE: [LAUGHING] AUDIENCE: [INAUDIBLE]? PROFESSOR: One of the first bubble markets-- who [INAUDIBLE] bubble markets? You've got to have heard about it in the past four years-- eight years. 2008 was a real long time ago. Oh, geez. OK. AUDIENCE: [LAUGHING] AUDIENCE: It's OK. Next one's coming up next year, so it's cool. PROFESSOR: Oh, yeah, I read those articles, too. So this is about prices of tulips, flowers, being driven up by speculation to horrendous amounts, and trying to be smart and exit at the right time, and often not quite pulling it off and losing your shirt in the process. And this game puts you in the shoes of a [INAUDIBLE] speculator, someone who is trying to play this market at that time, when the market is behaving in this very, very volatile way-- incredible increases and yet even faster crashes. [? It's ?] actually designed by a professor at Syracuse University, Scott Nicholson, who runs [INAUDIBLE] with Scott, and it's pretty awesome. If you play it, you can see the little tokens, little flower heads, and you get to do things like try to convince buyers to buy at a higher price than what you had originally bought it at. Things like that. [? Care to talk ?] a little bit about E3? GUEST SPEAKER: Yeah. So I chose three games. So what we're asking you to do is choose a real world historical, political, cultural, or economic situation, so you might be choosing some problematic situations, like the conquest of Africa by Europeans. So it's a game about African exploration during the 19th century. It was made in the '70s. They tried to be less problematic.
They were not entirely successful, because the point of view of this game is entirely that of the European explorer entering this forbidden Africa and finding things there-- basically finding Zulu tribes, which were not all throughout Africa, of course, and finding other explorers who had died there previously, biblical things like King Solomon's mines, things like that. But mechanically, it is an interesting way to talk about exploration, not so much about the bigger issues around what imperialism was, what that kind of conquest was. If you're thinking about doing a live action game, we've got a couple of different versions of how to do live action games. It can be a live action role playing game. It can be a party game, a conversation game. This is a book called Heads of State: Nine Short Games About Tyrants. We've got-- I haven't played many of these yet, but basically, the first game here is called Coup D'etat. GUEST SPEAKER: Basically, why are dictators what they are? How did they become a dictator? In this case, Gaddafi's idolization of Egyptian president Gamal Nasser-- reading Nasser's philosophy of the revolution, thinking about plots against the Egyptian monarchy, but also entering into the Military Academy in Benghazi in 1963, and then having other officers in training with him organizing the group that's going to overthrow the pro-western Libyan monarchy. Basically, you're dictators in school, learning how to be future leaders, in one of these games. So what this book is trying to do is give you multiple different viewpoints of tyranny and of those kinds of problematic things. There's communism in here. There's Maoism in here. A game about the disappearances of people-- so if you're thinking about making a game about Argentina in the '70s, good luck, but they tried it, so you can see what they did. And then another version is Dog Eat Dog. It's a game about imperialism, in particular the assimilation of the Pacific Islands.
The great thing about this game is you've got people who are acting as the new colonizers, but you've also got people who are acting as the native populace, and the different interactions that happen there. Basically, if you decide you want to do something more serious, if you try to do something with more of a problematic tone, you might want to consider having multiple different perspectives, or staying with a single perspective but at least understanding what the other different perspectives are of the people who are inside of that system. It's just good to take account of-- PROFESSOR: I think if every player is taking on the perspective of a single person, that's OK, even if different players in the same game are taking different perspectives. I think that might complicate things a little bit because, again, you end up in the situation where you're designing basically two very different kinds of games that are interacting, but some of you have had experience with that and made it work. So it's certainly not out of the question. What we don't want is a situation where you are simultaneously two people, right? It's like, I am controlling the survivors and the zombies simultaneously. That's a little bit weird. Yep! AUDIENCE: Can we use a single persona who has multiple personalities? [INAUDIBLE] PROFESSOR: That is-- We can talk about that, but good luck. AUDIENCE: So if it's really about the [INAUDIBLE] interesting way of pulling it out? Maybe there are multiple players playing those individual personalities? I think the biggest focus is the system, right? PROFESSOR: It should be less about the individual person-- because I get the perspective of that person-- but more about the conditions in which that person lived. GUEST SPEAKER: Yes. AUDIENCE: [INAUDIBLE] game before where you could be the president of multiple corporations.
And like, in one corporation you'd just, like, embezzle all their money [INAUDIBLE] GUEST SPEAKER: That sounds like [INAUDIBLE] PROFESSOR: Yeah, and that's where, ostensibly, in a lot of the [INAUDIBLE] games, your actual role is one of a robber baron, of a capitalist who lived in the 1800s, who owns shares in multiple companies. You're not really the president of a single company, but you may take on that role somewhere in the middle of the game. Your role is not defined by being president; your role is defined as being a person with a lot of money in the 1800s, trying to increase that wealth through railroad systems. AUDIENCE: Can it be satirical? GUEST SPEAKER: Yes. PROFESSOR: Absolutely. Just because you decide to make a game about a serious topic doesn't necessarily mean your game needs to be dull or somber, and I've got some examples of that. So how many of you have played Crunch? [INAUDIBLE] Nope? Two-person card game. You are the CEO of a bank. You are trying to get out of the bank with as large a golden parachute as you possibly can, to hell with the health of the bank itself. This game encourages you to take advantage of the government's generosity in bailing you out. It encourages you to hide cards on your body to secretly fund your own personal wealth-- to be able to take money out of your own bank and put it into your own personal accounts, so that it becomes your fortune rather than the bank's fortune. To make really risky loans that may never pay out. It came out in 2009, so you can understand why this game was made. This company, Terrible Games, basically makes satirical games that comment on real world situations, but this one is very, very clearly, you are a banker. Well, the satirical vision of a banker. They have another game called the War on Terror, which I did not bring up because it's a little bit unclear who you are in that game. You're supposedly a world power, but are you the President? Are you the government?
Sort of like all of the government at once. Or are you some sort of media construction of what that country is? Because you can get branded as evil, for instance. [INAUDIBLE] evil in it. But then that doesn't actually change your role in the game, which is, I think, part of the point of the game. The point is that whether you are a terrorist state or not, you kind of all engage in the same sort of activities anyway. Being branded as evil just lets you do it with a little bit more impunity. Anyway, that game's not a great example of what we're looking for, because who you are in the game is not quite well defined. AUDIENCE: Are these games usually a set number of players, or do they have [INAUDIBLE] range like 2 to 4, or is it [INAUDIBLE]? PROFESSOR: This is a two-player game. Specifically, Campaign Manager 2008, again, a two-player game. Tulipmania has a range. It really depends on your ability as a game designer to design for different people, but for this assignment in particular we're asking for two to four people, but a specific number. So you can design a two-player game, a three-player game, or a four-player game, but you don't need to make the game playable by a range, and that means you can tune it very specifically for a fixed number of players. The Gentlemen of the South Sandwich Islands is an absurd game of logic discovered in 1821-- which is not true at all. It was really Kickstarted as a project about three years ago. Four years ago, maybe? Yeah. This is interesting because this is probably right at the edge of what I would consider acceptable for this class. This is a game about sort of fantastical courtship where you are two gentlemen trying to win the affection of ladies who are walking around this island, but also there are-- what's the word? The handlers-- AUDIENCE: What do you call it? [INAUDIBLE] chaperone. PROFESSOR: Chaperone. Yes.
But the chaperones are also following them around, and you're trying to get the ladies alone so that you can have a quiet talk with them. Kind of a game of Jane Austen. Now Jane Austen, of course, was writing about some of these social situations, but every single character that she wrote about was fictional. But the world that she was writing about is at least plausible. And it's based on [INAUDIBLE]. If you want to look at something and make the game set in Dickens' Oliver Twist or something like that, you can. You could be individual people in that setting. Do pay some mind to how grounded in reality those things were. The more grounded they are in reality, the more different sources you can pull out for inspiration. If it's just like, well, this was just the fantastical invention of one person, or something like that, then you'll only have Scott Card's writing as a reference. Whereas with Jane Austen, you can actually look at real historical stuff to be able to get more ideas about how that social system works-- and Jane Austen's books are actually about these social systems. And they're incredibly complicated. But this game is really just about walking across bridges on a very small island-- this is a sort of holiday island-- and just trying to get quiet talks with ladies. Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, I definitely don't want a game that takes more than four people to play. GUEST SPEAKER: For us to grade, it's hard to get more than four people. PROFESSOR: Yeah, you will be getting your grade in August if you give me a game that requires six people. I can't get that many people in a room. GUEST SPEAKER: You want your team to be no bigger than four or five still. PROFESSOR: It's easier to test [INAUDIBLE] at least [INAUDIBLE] the game. Something to think about is agency. Not just what the world looks like to that person. Like, for instance, in Source of the Nile you are a colonial European power for the most part, right?
And you're seeing the continent of Africa as this resource-rich land to grab stuff out of. That's the perspective. But then what are the verbs? What are the things you can do in that context? In [INAUDIBLE], you are in communist Poland in the 1980s. Poland? Poland, right? Yeah. And the only thing that you can do is stand in line for rationed goods, but what you can do is affect your place in line by-- let's see, carrying around a cute baby is one of the ones on here. I believe you can knock somebody's hat off, or you can change which line you're standing in, so maybe instead of waiting for dry goods you are waiting for furniture. That sort of thing. It's kind of neat because all the goods that we have in here are actually pictures of the actual rationed goods that were available in Poland in the 1980s. In fact, this was a project that was funded by some Polish historical institute. But the only agency you have in this game is which line you are standing in and where in line you are standing, because that's the perspective that they want to get across to you. And so your whole game is all about jockeying for position. And it's actually a pretty well-designed game. We played it a few times. That has both the perspective of, this is what the world looks like-- the world looks like a bunch of lines for rationed goods to me, as someone who lives in that world. It's a kind of satirical, lighthearted tone, but it's grounded in reality-- as in, that's the only thing I can do. What can I do in this world that's going to give me some interesting decisions to make, right? And it's going to be, which line do I want to wait in? [INAUDIBLE] again, historical-- I can't remember which decade this is in, though. AUDIENCE: 1890s is it? PROFESSOR: I think that sounds right. Somewhere in the rules they mention it. And it's about Manhattan.
It's about ruling New York, where it says it's a game of backstabbing, corruption, and temporary alliances, and you're basically just running for mayor. And if you can't get mayor, you're running for other city offices. Turns out that actually, as soon as you become mayor, you have a giant target painted on your back from everybody else who is in other positions. But you can become chief of police, precinct chairman. You then abuse those powers to sort of place new immigrants in different wards and change the demographic makeup of different wards of different boroughs in New York, to be able to change the way how they vote. AUDIENCE: [INAUDIBLE] PROFESSOR: The object of the game is victory points for the most part. But the victory points-- you can get them by becoming mayor, but then you're not staying there for long. AUDIENCE: [INAUDIBLE] machine and trying to control it? And the agency that you have is really just you were able to coerce people to come to various locations and you're trying to get the vote out. So [INAUDIBLE] were 14 people [INAUDIBLE] or doing things like having free food [INAUDIBLE] things like that. PROFESSOR: You could possibly evict people from boroughs and move them into other boroughs to create-- not ghettos, what am I thinking? AUDIENCE: [INAUDIBLE] PROFESSOR: [INAUDIBLE] the districts, right? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, instead of changing their minds, they're actually moving people. Yeah, because, you know, if you're chief of police, you can just grab people and put them somewhere else. You can throw them out of New York. So that's their view. Yeah, that's the perspective they are trying to present, this abuse-- they're saying that in this era, if you're one of these people, you are actively engaging in the abuse of power to sort of play with the lives of immigrants. So, yeah, it turns out that it's just more of a game of territory.
When you actually play the game you are looking at Manhattan as a bunch of resources that get moved around, but it's a very specific perspective, and I think the game is played over a period of 16 years, which is [INAUDIBLE] or something like that. So check it out when you get a chance. Any questions about assignment three? Now, one little wrinkle is on April 16th, which is just two weeks from now, we want you to come in here and give us a pitch presentation on what you've decided to do with your project, and we want to see a concise description of how you expect it to be played-- at least, two weeks into the project, what you've decided so far. Because [INAUDIBLE] you want to create, so you want to tell us what kind of feel you want to get out of it, whose perspective this is, what type of agency you're likely to have in the game. Very important to tell us where you are drawing your historical references from if it's a history game, where you're drawing information from. If it's a contemporary game, what are you using to inform this thing-- your experience? Now, you could theoretically all make a game about being an MIT student at MIT in 2014. I'm going to encourage you to think a little bit broader than that, because you don't need any additional information, and that's why it's kind of boring to that degree. You can draw inspiration from other games, but also books, websites, news articles-- not just Wikipedia, but Wikipedia can be a useful resource to find other stuff. But we want you to give that presentation as if you were seeking a green light, as if you were going to a publisher saying, please give me money to be able to continue work on this game. We're not going to turn you down; we're not going to cancel your project. But we are going to give you feedback, both on how you can improve the project based on what we've heard, and also on your presentation skills.
That's going to be one of the few times in this class where we're talking specifically about how you deliver the presentation, because you might have to do that one day. And there are other classes, especially [? CMS 610, ?] where you will get a lot of practice doing pitching, so this is a little taste of that. If you want to be able to get more practice, I encourage you to look at that class. Everything else-- at the end of the semester you're going to do the same sort of presentation you gave today. We expect change logs. We expect a one page write up. The one big difference is that everything needs to be handed in at the same time. Previously you could hand in your write up the following week; I believe that's the case for this assignment. We can't do that for the final assignment in an MIT [INAUDIBLE] because of MIT course rules. You can't hand in anything late, and only the class can ask you to hand in anything later than that. So everything is due on the 14th. Finally, we have a bunch of guest lecturers that will be coming in. This is starting to segue into what you do with the information that you're getting in this class after this class is over. April 9th, exactly a week from now, will be the Game Makers Guild-- people who actually are a collection of people here in Cambridge who make board games and card games for retail, but have been funded through a range of different sources, all the way from the traditional publishing model to Kickstarter. And they'll all talk about their experiences and what it took to be able to take these ideas into something that people could buy off the shelf. We also have another guest lecturer from the Geneva University of Art and Design. These are mostly master's students and people who recently graduated from master's programs and are going to a design school that is thinking about game playing.
They mostly do tech designs, so they're digital games for the most part, but because it's an art school they're actually pretty broad-minded about what happens in the computer and what happens outside of the computer. So that should be fairly interesting. They're gonna talk a little bit about their work. [INAUDIBLE] will be coming in on May 7th. He is a theorist. He actually currently teaches at a number of places, including New York University. He used to teach here at MIT. He is probably best known among game scholars for writing his books The Art of Failure, A Casual Revolution, [INAUDIBLE]. These are the three books that are about games and-- not so much how designers think about games, but how to analyze games from a sort of abstract perspective. [INAUDIBLE] If you're interested in those books, please feel free to check out our library and game lab. I'll be happy to let you have a look. They're really, really easy reads, because he believes in padding out his books with images. He has told us this very specifically. He was like, "Well, I could write about half as much and then just have larger images, and then the book becomes bigger." He also makes games, and he'll be here to not only talk about his current work and what he's working on right now, but also play test games, so you can give him some feedback-- because he's taught here before; he's been part of this class a couple of years ago. So please don't miss any of the guest lecturers. They're great folks, and they'll give you an idea of what you could do if you want to go to design school, if you want to go to academia, if you want to make your own games and publish them and make a commercial product-- just give you some perspective on what you can do. All right, whose games are actually ready at this point in time? Everyone's ready? Awesome. Let's play those games.
|
MIT_CMS608_Game_Design_Spring_2014
|
5_Imperfect_Information_and_Dice.txt
|
PROFESSOR: So today's reading is kind of there for two reasons. One, the David Parlett reading is to introduce to you this book, which is called The History of Board Games. How many of you read the version from Stellar? OK. Anyone get the book from the library? Because I think it's on reserve. If you really want to read a hardback, paper book, you could just go to the library and you should be able to check it out, probably from the humanities library. So I'm just letting you know that that option is out there. So one of the reasons is, one, just to introduce to you a bit of what historical research looks like in game studies. There are people who have careers, like David Parlett, who basically go deep into the history of how some kind of game evolved. One of our old alums, Jason Begy? He looks at railroad games. I'm pretty sure he hasn't changed his thesis topic yet? Again, he's been looking at that for like three years, four? AUDIENCE: Yeah. Three years now. PROFESSOR: And so they'll go to museums. They'll talk to archaeologists. In the case of railroad games, obviously, there are no archaeologists involved. But you talk to people who worked on classic game designs. You talk to retailers. You talk to people who actually owned or played some of these games. Now, a lot of the games that are written up in today's reading were kind of ancient-- really, really ancient-- games. But how many of you feel like you've actually played a game that sounded like one of those that were written up? AUDIENCE: Well, especially-- they had dreidel in there. PROFESSOR: Yeah, they had the dreidel in there. And that's Purim and Hanukkah? Those are the two-- AUDIENCE: Just Hanukkah.
PROFESSOR: That was pretty much just Hanukkah. OK. It's funny, because when I was reading about the dreidel-- like, the initials mean "a great miracle happened there"-- I didn't realize that there was a different version if you play dreidel in Jerusalem, where it's "a great miracle happened here." Which makes sense for Hanukkah, but I don't think-- that doesn't quite make sense for Purim because-- AUDIENCE: No, it's only Hanukkah. PROFESSOR: That's just to do with Jerusalem. All right, what else? The other games, the state board games or-- a lot of these games border on someone setting up shop on the street, and then if the cops come they run away-- one of those kinds of games, right? Let's see, what were some of the other things? They had a lot of lots. And one of the traditional Roman implements was the knuckle bone. I'm not quite sure if they mention what type of animal it was? I think it was-- AUDIENCE: Sheep. PROFESSOR: Sheep or goat, yeah. But that's where the idea comes from. And the idea comes up quite a bit in game studies, at least since Caillois's time, since I think the '60s, where it's introduced as a generic term to describe games of chance. I think the word earlier might also have been used, like in traditional Latin scholarship, to describe games of chance. But that's where you get phrases like alea iacta est. Does anyone know what that means? The die is cast. It is what Julius Caesar says when he was about to cross the Rubicon, famously. And originally there was some confusion about whether that meant that the die, like dice, had been thrown, or whether that means the die as in the mold-- something that you put liquid iron in, and then you cast it. But that still has the same sort of sense of, all right, it's out of our hands, right? It's already been done. So it gets the same point across. But it actually goes back to Latin, specifically talking about these knuckle bones-- these knuckle bones that you are throwing, which are called alea.
So now, bringing it up to date: Parlett makes an interesting point that we are trying to simulate something that seems realistic-- that even with your best efforts or with your worst efforts, sometimes certain things are out of your control. And simply describing them as games of chance might be doing them a disservice. Because you see the same sort of randomization come up in computer games, in things like combat. Especially strategy games, where there's a chance that something may fail, a chance that something may succeed. And when you think about games like war game simulations, they are obviously not trying to create this fantastic sort of fantasy element-- unless you are talking about things like RPG-style games. But we're talking about World War II games and things like that, that are trying very hard to simulate something that's real. And one of the things that they're trying to simulate is, sometimes you just don't have control over everything. And that's where randomization elements-- this particular chapter is about dice and other physical things that you use. We included spinners. We included things that were basically binary. And that's actually a cute idea, if anybody wants to think about trying to introduce the bell curve to any of your results without having people roll a whole bunch of dice. Or if rolling two dice gives you numbers 2 to 12, but you wanted a different range, you could just have them roll a whole bunch of flat sticks, and they land one, zero, one, zero, and you still get a nice bell curve. So that's an idea you can use in your game as well. Or you can use things like spinners, which pretty much just give you an equal chance no matter where it lands. And you can make a spinner of arbitrary complexity. You could make a 20-sided spinner. It will look basically like a circle, but you could. So one thing that Parlett introduces is this idea that games of chance are also games of imperfect information.
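As a quick aside, the flat-stick randomizer and the two-dice range mentioned above are easy to check with a few lines of Python. This is purely an illustrative sketch; the 10-stick count and the trial count are arbitrary choices, not anything from the reading:

```python
import random
from collections import Counter

def roll_sticks(n_sticks):
    """Each flat stick lands flat side up (1) or down (0); the result is the sum."""
    return sum(random.randint(0, 1) for _ in range(n_sticks))

# Many binary sticks summed together give a binomial distribution --
# the "nice bell curve" clustered around half the stick count.
counts = Counter(roll_sticks(10) for _ in range(100_000))
for total in sorted(counts):
    print(f"{total:2d}: {'#' * (counts[total] // 500)}")

# Two six-sided dice, enumerated exactly: 7 has 6 ways out of 36,
# while 2 and 12 have only 1 way each.
two_dice = Counter(a + b for a in range(1, 7) for b in range(1, 7))
print(two_dice[7], two_dice[2], two_dice[12])  # -> 6 1 1
```

A spinner, by contrast, is uniform: each wedge is equally likely, however many wedges you add.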
Games of perfect and imperfect information-- what's the difference between the two of them? It's the first time we're encountering these terms in this class. Anyone want to hazard a guess? What's an example of a game of perfect information? AUDIENCE: Chess. PROFESSOR: Chess. That is a game of perfect information because? AUDIENCE: Because all of the information you are going to be using in that game is right there in front of you. PROFESSOR: Right. Players see all the game state, although, Parlett says, they don't know if one of them got out on the wrong side of bed that morning. They don't necessarily know how experienced each other is, and that's kind of weird-- is that part of the game or outside of the game, right? Any other examples of games of perfect or imperfect information? AUDIENCE: Go is perfect information. PROFESSOR: Perfect. Which would-- AUDIENCE: Poker would be imperfect. PROFESSOR: Poker would be imperfect because you-- AUDIENCE: You don't know the deck, you don't know what's in your opponent's hand, stuff like that. PROFESSOR: Does poker always have to be played with a single deck? All 52 cards? AUDIENCE: You could even not know how many cards, how many decks there are. PROFESSOR: OK, you may not know. So that's a question too. When typically we play it at home, it's just one deck of 52 cards. You know that you're holding the king of spades; there's no other king of spades out there. But you play in some casinos, and they've got a five deck mix-- there could be two kings of spades. Yeah? Poker is all about trying to figure out what other people have, and what's the chances. Actually, you never know what other people have, but it's all trying to read what's the chances that they've got something, plus what's the chances of your hand beating that. Because of all the variants of poker, like Texas hold 'em and everything, which have some cards that nobody gets to see and some that everyone gets to see.
So perfect and imperfect information is kind of like a sledgehammer to deal with this, because you're not quite sure. Random cards that you're drawing from a five-deck stack, or random numbers that you're rolling out of a die, have a very, very different quality as pieces of information than a card that somebody is holding and not revealing to everybody else, because that's a fixed quantity. It's something that was dealt in the past, and Parlett describes it as sort of past imperfect information. And then there's future imperfect information: you're going to roll it, and then and only then is that information generated. Until then, no one knows that number. So there's a scholar called Celia Pearce over at Georgia Tech, and she came up with a taxonomy just to help her discuss this with her design teams, and I thought it was really, really useful. So instead of saying imperfect and perfect information, we start off with the obvious one, which is public information. Games of perfect information are games where all information is public-- everything is public information. The state of the board in chess or in checkers-- and probably some computer games where this is the case as well, although for multiplayer computer games nothing comes to mind, because a lot of computer games take advantage of the fact that everyone has their own screen to do imperfect information. Then you start jumping into the realm of imperfect information that's private, which is the stuff that at least one person knows for sure-- possibly, if you're playing a team game, there's more than one person who knows this information for sure.
If you're playing a game like bridge, where you're sort of communicating, but in codes-- if you've been playing with someone for a long time, you can get to the situation where even though I can't see your cards, I pretty much know what cards you have, because you're my teammate and you're trying to communicate with me in a way that I understand. Then there's random information, which is stuff that you get from, say, drawing from the top of a five-deck stack of poker cards, or something that you get from rolling a die. Stuff that you can't really intuit-- what Parlett calls future imperfect. For all intents and purposes, that information is introduced the moment you trigger that randomizing method. The moment you throw the dreidel-- spin the dreidel, roll the dreidel? It spins, OK. The moment you throw the knucklebones, whatever it is, that's the moment when the information gets generated. There may be some probability in how you expect this to work out. Like if you're rolling two dice, you expect 7 to be the most likely outcome, and 2 and 12 to be the least likely outcomes. But that information doesn't really exist until the moment it's generated. So what's missing? AUDIENCE: There's information that's computable out there. So, for instance, when you're playing cards, I know technically the rules are that you can't see the cards from the round that's in play, but there's information which, if you were a computer, you would remember. Oftentimes it's just a fact that you can't actually remember it. PROFESSOR: The stuff that's hard to remember-- let me see if I'm hearing you right. So for instance, the game of blackjack to a card counter is a very different game of blackjack than for someone who sits down and plays for one round. Right?
Because a card counter is aware that a certain number of cards have already been played out of this five-deck stack. Being able to remember how much has been played out of it changes the actual percentages of what could come out. So what's been played before is technically public information. Previous rounds were played out in the open-- if you were really, really paying attention, you could have seen that information. It's hard to remember, but if you were a computer, a videotape recorder, you would be able to see that information. But then there are all the other cards left in this stack that's already been shuffled, and they're in a fixed order. It was randomized when they shuffled it, and now it's just sitting in a stack. That's what Pearce calls hidden information. It doesn't even have to be a five-deck stack. It could be just the cards that were dealt face down at the beginning-- is it three cards for Texas hold'em? Two? OK, the cards that are dealt face down in the center-- nobody gets to see those until later, but they're dealt at the beginning of the hand. AUDIENCE: The public? The public cards? Oh, that's three. PROFESSOR: Three, right. And they're down, right? AUDIENCE: After the first round, they're face up. PROFESSOR: After the first round they-- yeah, that's right. It's a long time since I played hold'em. So three cards, they're face down, but they've been dealt. The randomizing has been triggered, the information is there, but no one knows it yet. AUDIENCE: In other words, that's not really different from random information-- the difference between "I haven't actually dealt the card yet" and "I've dealt the card and no one sees it." It seems like hidden information that no one knows is equivalent to random information.
PROFESSOR: Not quite. There is a different quality to it, because that variable can no longer change in the rest of the game. Maybe the Texas hold'em one starts to get a little blurry, and it's-- AUDIENCE: But with random information, players can't treat it any differently. Let's say I wrote a die-rolling program with a random seed, and no one knows what the next roll will be until the program shows them. You could say this is hidden information and not random information, because the random seed-- it already knows the-- PROFESSOR: Yeah, if you knew the seed and you knew the randomization algorithm, then I would argue that information is no longer random. That's why they call it pseudo-random. It's not actually random anymore. AUDIENCE: But it's random in the sense that no one-- it's random with regard to the players, so the players can treat it as random information, because they don't know how-- PROFESSOR: You can treat it as random information. AUDIENCE: So if you deal two cards out, face down, from the deck, then at the time when you're dealing the first one, it's random? AUDIENCE: Yeah. AUDIENCE: But then the first card is already in play, and so when you take the second one, that card has a smaller space of possibility. There is a specific card that that second one can't be: the first one. AUDIENCE: But we don't have to see it that way, we can just-- AUDIENCE: What I'm saying is that if you deal one card and you want to reproduce the exact same situation elsewhere and then continue dealing from the deck, then maybe you're going to shuffle the deck before you deal a second card, and then that new card is random-- the second card-- except it's random out of a smaller set of possibilities.
And it's not just random, because there is hidden information that the-- PROFESSOR: There's one other thing that you're not taking into consideration, and that's the fact that after these three cards in Texas hold'em are dealt out face down, no one's had a chance to look at them, but everyone's gotten a chance to see some of the cards that have already been dealt. And that has already changed the likelihood of what those face-down cards could be, because-- AUDIENCE: Just because something isn't perfectly uniformly random, chosen from the distribution of everything, doesn't mean it's not still random. So what I'm saying is, if I deal a face-down card, and then I turn over a face-up card that's the five of diamonds, even though I now know the face-down card is not the five of diamonds, it's still random, because it's drawn from an equivalent distribution. AUDIENCE: I mean, this is a perfect Schrodinger's cat, isn't it? PROFESSOR: No, it's not, because what happens is that you keep getting new cards throughout the entire game, right? And these cards slowly get revealed. AUDIENCE: I see that there is a distinction made between random and hidden for the purposes of classifying games. There's probably a useful reason for it. Sure, you can think of it as there now being dependence between different cards and things like that, but it's probably just there as a useful way of thinking about games. AUDIENCE: What I'm saying is I don't think it's useful-- I think there are different relationships. PROFESSOR: I'm not talking about statistical difference. I'm talking about what the players think about these three cards. They know the three cards are not being re-dealt every single hand. For instance, someone can take the deck and shuffle it in the middle of a poker game. No one complains about that. You're allowed to do that.
You can't re-deal those three cards. You're not allowed to. AUDIENCE: It doesn't change the probability distribution. PROFESSOR: It doesn't change the probability distribution, but you're not allowed to. The rules do not allow you to re-deal those three cards. There is a different weight that the rules have placed on those three cards that have been dealt, saying those things are now fixed, whereas everything in that deck can be re-ordered-- your hand and your hand. AUDIENCE: A different example is the game Clue, where you have the one weapon, the one person, and the one room, and figuring that out is how you win. In some senses it's random, but it determines who wins the game, and so in that sense you wouldn't think of it as random necessarily, while the rolling you do to move around and discover these things would be random. PROFESSOR: And that is actually also a statistically different example, but a much clearer example of hidden information. Because in that one, as you play the game, you are getting more and more information about what those cards could be. In fact, that's the whole point of the game, right-- trying to figure out what's in there. So in that particular case, that's hidden information that's random at the point when it was generated, but as soon as people start playing, start making accusations, it's no longer random, it's just hidden information. So yeah, Clue is a much better example of that. Maybe the three cards in Texas hold'em are different. I still maintain that they may not be statistically different, but they are qualitatively different, because you can never re-randomize those cards according to the rules. AUDIENCE: It's something players think about, not because they're-- I think the reason different players think about them differently is that-- PROFESSOR: Well, that's-- AUDIENCE: That's not true. That's just not true. The cards are down and they're down.
You cannot un-put them down. So qualitatively, they are completely different. AUDIENCE: When they're exactly the same, did you re-deal-- [INTERPOSING VOICES] PROFESSOR: OK, he had his hand up first, so let me get to him first and then we'll get back to you. AUDIENCE: It's actually also statistically different, because-- let me simplify the poker example a lot. Let's say that we take one card from the deck and put it face down, and then we flip a second card so that everyone sees it. And then let's say someone gets it into their head to take that one card that was hidden, shuffle it back in, and deal a new one. The two situations, where you do that and where you don't, are different, because in the first situation, the card that was hidden had an equal probability over all 52 cards, and when you get that extra piece of information, your adjusted probability distribution is equal over the 51 cards. However, if you reveal that one card and then take it back, then-- you don't necessarily have that card visible to everyone, for example-- PROFESSOR: But it's true that in the example-- AUDIENCE: With two or three cards, it would be the same. PROFESSOR: Yeah, you are describing a situation where someone's actually had a chance to look at that face-down card. And that does change things, because that becomes private information to someone else. So that takes it kind of outside the realm of the example. But yeah, I do agree that statistically there's not actually a huge amount of difference between the cards that are face down and the cards that you're drawing off the top of the deck. And then we'll get your hand and then your hand. AUDIENCE: So the problem I have is, first of all, statistically, there's no difference. Not a little difference-- there's no difference. And so as a player, you sort of think about cards that are face down a little bit differently than cards that are in the deck.
But if I'm playing to win, ultimately I have to think of them as mostly random. PROFESSOR: Sure. AUDIENCE: I have to think of them in terms of the probable distributions of the cards. PROFESSOR: Strategically, it's probably advantageous for you to treat them as statistically the same, because they are. And I'm not disagreeing that statistically they have the same likelihood. I'm saying that in terms of how the rules are worded, they are very different. AUDIENCE: OK, so does the use of this definition come in when you're writing the rules? PROFESSOR: When you're designing a game. Not necessarily when you're playing the game, but when you're designing the game. AUDIENCE: One thing to think about is, when we were talking about that Clue example-- when you have something that's hidden, you can possibly gain information about it that will be useful, right? You gain information about what that Clue card is. There's no way you're ever going to gain information about what the next roll of the die is going to be, unless you're doing some kind of major statistical analysis. AUDIENCE: But that's possible. PROFESSOR: For the roll of a die, yeah, I mean-- AUDIENCE: But it's still random-- there's still a probability distribution over what it is? AUDIENCE: Not really. AUDIENCE: No, if you're rolling the die-- AUDIENCE: I'm not going to go through that. It's just theoretically possible that you could compute what the die roll is going to be before I roll it. PROFESSOR: There's one very clear case when you can compute it, right? Because you know those dice are loaded. Yeah, you know those dice are loaded, and in that particular case you've got private information, just in the same way as if you're using, say, a pseudo-random number generator. AUDIENCE: But if you flip a Clue card more than once, it doesn't change. And if you get information about what that is, it stays the same, right?
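The statistical claim in this exchange, that an already-dealt face-down card and a fresh draw follow the same distribution once you condition on what has been revealed, can be checked exactly on a tiny deck. A minimal Python sketch (the 4-card deck and the card names are illustrative, not from the discussion):

```python
from fractions import Fraction
from itertools import permutations

# A tiny 4-card "deck" so every possible ordering can be enumerated.
DECK = ("A", "B", "C", "D")

def face_down_distribution(revealed):
    """Distribution of a card dealt face down first, given that the
    second card was then revealed to be `revealed`."""
    counts = {}
    for order in permutations(DECK):
        if order[1] == revealed:          # condition on what everyone saw
            counts[order[0]] = counts.get(order[0], 0) + 1
    total = sum(counts.values())
    return {card: Fraction(n, total) for card, n in counts.items()}

dist = face_down_distribution("A")
# Conditioned on "A" being face up, the already-dealt face-down card is
# uniform over the three remaining cards, exactly like a fresh draw.
assert dist == {"B": Fraction(1, 3), "C": Fraction(1, 3), "D": Fraction(1, 3)}
```

So statistically the two are the same; the professor's point is only that the rules treat them differently.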
AUDIENCE: Do you think that the Clue example is an example of private information? AUDIENCE: No. AUDIENCE: Is it an example of private and random information? It's not really-- PROFESSOR: No, and it's important to know that it isn't private information, because no player actually gets to see what that card is when it is inserted into the envelope. For those people who haven't played Clue, what happens is that you've got three different stacks-- people, places, and things that you can kill people with, from the 1950s or something. You shuffle each of the three decks, you pick one card from each, and those three cards go into an envelope that nobody gets to see until pretty late in the game. And those cards will never change. That's a very clear example of hidden information, because at the point when they were generated, you're right, that was random. But then it gets turned into this piece of hidden information that everyone is trying to find out during the game. The whole point of Clue is that you're trying to figure this out. And there are rules governing who gets to see that hidden information and when, there are rules governing how you generate that information, and there are mechanics for you to be able to figure out what's inside there. Let me see, there was another point that I was going to make about that. It's true that if, say, we implemented that on a computer-- any game that has some sort of randomizing thing in a computer-- you could figure out how the randomizing was being done, because very few things in a computer are truly random. Then you would be able to figure out, OK, the next die roll should be a certain number.
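That last point about computer randomness can be shown in a couple of lines. In this sketch (Python's random module; the seed value 2014 is arbitrary), anyone who knows the seed and the algorithm can reproduce every "random" roll in advance, which is why a known seed behaves like private information rather than random information:

```python
import random

def roll(rng):
    """One roll of a six-sided die from the supplied generator."""
    return rng.randint(1, 6)

# Two generators with the same seed and algorithm produce the same rolls.
table_rng = random.Random(2014)     # the game's generator
cheater_rng = random.Random(2014)   # a player who learned the seed

upcoming = [roll(cheater_rng) for _ in range(10)]   # predicted in advance
actual = [roll(table_rng) for _ in range(10)]       # the "random" rolls
assert upcoming == actual   # pseudo-random, not random: fully predictable
```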
The other day I talked about how, when you play certain strategy games and you have a 50% chance of succeeding at something, then technically if you save the game right before you did it, and you kept retrying, you would eventually have a 100% chance of succeeding, because you just keep playing it until you win. Unless, of course, it's a pseudo-random number generator that is saving the seed, so that the outcome turns out to be the same each time. And if you have that information, that's actually private information. The question is, is that actually something that you're designing into your game, or something that you just choose to live with? As a designer, you get to make that decision. You get to decide things like, I'm going to save the random number seed when I save the game. You can decide to reveal what your randomizing scheme is to the rest of the world-- and to a hardcore computer hacker, it wouldn't be that hard to figure out what kind of method you're using. Or you can say, no, this is a very closely guarded secret-- I'm actually generating it by analyzing this Geiger counter and seeing when certain ticks come. That's as close to random as we can get. You can choose to do all of that as a designer, and the question is, where do you want to set those limits for your own game? Things like, yeah, theoretically a pair of dice can generate any number between 2 and 12. But anyone who has played games for a certain amount of time realizes that 2 and 12 are really rare compared to 7. There are games that take advantage of that, and there are games that don't. There are some games that basically treat a roll of 12 and a roll of 2 exactly the same as a roll of 7-- especially the race games, where you move that number of steps, but the game isn't balanced around 2 and 12 coming up less often than 7.
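The two-dice claim is easy to verify by counting all 36 equally likely outcomes; a quick Python check:

```python
from collections import Counter
from itertools import product

# Tally the sum of every one of the 36 equally likely two-die outcomes.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

assert sum(totals.values()) == 36
assert totals[7] == 6                        # six ways to roll a 7
assert totals[2] == 1 and totals[12] == 1    # one way each for 2 and 12
assert max(totals, key=totals.get) == 7      # 7 is the most likely total
```

So a 7 is six times as likely as a 2 or a 12, which is exactly the imbalance some games exploit and race games ignore.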
That's why-- I've seen versions of things like Candy Land where they replace the spinner with dice. And it's like, wait a minute, that completely throws off the way the pacing of the game works. But for Candy Land it doesn't matter-- you're still roughly equally likely to win. So for that game, that's something where you can do those types of substitutions. I think it is actually very, very useful to be able to talk about this when it comes to the design of your own rules, and to be able to talk about it with other designers. Say you have this piece of information that you're storing early on-- maybe because you've got cards that are flipped face down, maybe you've got cards you're putting in an envelope, maybe it's just a quantity of things in a bag that you have to reach into and pull things out of, like the bag in Scrabble. With things that are randomized in a bag, which one you're actually going to draw next is somewhat random, based on some sort of probability. But the distribution of what's in the bag could be public information, or it could be hidden information-- you might not know the distribution of stuff inside that bag to begin with. That's why I want you to be able to use these words in describing how things are working in your own game, to your own teammates, and especially to us. Because we want to understand what it is that you're trying to achieve with this bit of information. Hidden information typically is something that you're going to allow people to figure out over time, and there's a lot at stake over what that hidden piece of information is-- and it's kind of shared among everybody who's looking for it. Whereas something that's random is usually really important at the moment it's generated, and after that it's not so important anymore.
It has some sort of consequence that might be recorded in the score, but it's not like, from turn to turn, that particular roll or that particular spin was all that important. Private information and public information tend to be pretty easy to talk about, so they're just useful terms to get people to discuss with each other. So I just want to introduce these terms-- this is the taxonomy that was provided by Celia Pearce. It sort of expands on perfect and imperfect information, because imperfect information gets kind of fuzzy-- it has all these different versions. And now that everybody who's planning on showing up today has shown up, we should probably start setting up for playtests. So is everyone sitting in teams? I think everyone's more or less sitting where you were, right? Any questions so far on what we were talking about-- like, to clarify why I think these terms should be used differently? How many of you have changed your game mechanic, by the way? What would you describe it as now? AUDIENCE: Yeah, maybe deception. PROFESSOR: Deception. AUDIENCE: It's not built. PROFESSOR: OK, something like bluffing, something like faking out your opponent. Everyone else still pretty much sticking with what you've got? Is that still building? AUDIENCE: You're building, like, a board. PROFESSOR: You're building a board. I haven't seen that part of the game yet. AUDIENCE: Well, not building-- you're building on the board. PROFESSOR: I think I need to just play your game to get it, because it looks like [INAUDIBLE]. All right, so again, can I see which groups have two-player games? 1, 2, 3, OK. And the rest? Who has four-player games? And you all have three-player games? OK, all right. So, wait, hold on, both of you are two-player games, and then you played each other's games last time? Oh, you didn't? OK, all right then. Then that's easy-- the two of you should be playing each other's games.
Again, there should be observers who have recording material to be able to-- AUDIENCE: Is there a team of two playing the game, though? PROFESSOR: No, don't play simultaneously. AUDIENCE: OK, got it. PROFESSOR: So that team comes over here, plays this game, then everybody here goes over there, plays that game. There's probably enough time for that. AUDIENCE: Would it make sense to have all the two-player games on one set of tables? PROFESSOR: I think it would help if just the two of you came up here and brought your game with you. And then you can just hang out here, because we're only going to end up playing either your team's game or their team's game at one time. They're not going to play both games simultaneously. The rest of you, let's see. How many of you have your game ready to go? AUDIENCE: What was your question? PROFESSOR: How many of you have a game ready to go? All right. I've played your game recently, you haven't. AUDIENCE: I've not played this one. I've played an old version of that one, and I've played the one in the corner yesterday. I haven't played those two. PROFESSOR: OK. [INAUDIBLE] Have these two groups played each other's games yet? AUDIENCE: Probably. PROFESSOR: You have? Then what I'm going to suggest is that three people from this group come over here and play this game first, all right? And then Rick and I will join these two to play the middle game, and then the four of you will play that game. AUDIENCE: I've played that game already, though. PROFESSOR: Oh, you have? AUDIENCE: Yeah. PROFESSOR: I think we can-- all four of you have played the same game? AUDIENCE: Yeah. PROFESSOR: Has it changed dramatically since the last time you played it? AUDIENCE: Not really. AUDIENCE: How many tests of each game are we trying to get to? At least two playthroughs for each game? PROFESSOR: Just one, actually. I'm pretty sure that after one there'll be things to change. And if we have time, maybe we can get one more.
AUDIENCE: Let me do a quick scan to see what other staff are available, and we can get-- PROFESSOR: Let's see if we can get a few more people to play that game over there. All right, so let me just change this up, then. Three of you come over here and bring this game right now, OK? For the presentation at the end of assignment one, I do want you to talk a little bit about the game, but don't explain the rules of your game during your presentation-- that will use up all the time that you've got. Talk about the process: what you started with, what you tried, what didn't work, what kind of changes that made you make, what worked and how you rolled with it. I want to hear the story of how you got to where you are, because right after you do the presentations, we'll actually sit down and play each other's games just like today-- only it's not a playtest, so you don't have to do recording or anything, everyone just plays games. AUDIENCE: How long is the presentation? PROFESSOR: I think only five minutes. It should be in the syllabus-- I thought I wrote five minutes. It's pretty fast. It doesn't have to be one of those every-single-person-on-the-team-has-to-say-something kind of things, but if you want to organize it that way, you totally can. I just want to hear the story of the game. The other thing is that I actually did look up the situation with Texas hold'em, and I am going to reverse my position. That is, I think, not hidden information. Two things swayed me. One is I started discussing it with a bunch of other game designers online, and I discovered that there are a couple of variants of Texas hold'em where you never deal what they call the flop face down-- you just play it face up, one card at a time. At that point, it's just a random draw from the top of the deck, and the game plays out the same either way, so it's basically a random deal.
So yes, that is in fact random. That was a bad example to use for hidden information, whereas the Clue one is a good one, because the whole point of that game is that it gives you all these tools to reveal the information. At the point in time when you put the cards inside the envelope, it's random, but once they're inside and play starts, that's hidden information. Thank you very much.
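The Clue example the session keeps returning to can be sketched in a few lines: the envelope's contents are random at the moment they are generated, fixed (hidden information) thereafter, and players whittle down a candidate set as cards are shown. A minimal Python sketch (the suspect names and the seed are illustrative, and only the suspect deck is modeled):

```python
import random

SUSPECTS = ["Green", "Mustard", "Peacock", "Plum", "Scarlet", "White"]

# Setup: one suspect card goes into the envelope. Random at the moment
# it is generated, then fixed for the rest of the game.
rng = random.Random(608)   # seeded only to make the sketch repeatable
envelope = rng.choice(SUSPECTS)

# Play: each card a player is shown is proven NOT to be in the envelope,
# so the set of candidates shrinks as the game goes on.
candidates = set(SUSPECTS)
for shown in SUSPECTS:
    if shown != envelope:
        candidates.discard(shown)

# Deduction complete: only the envelope card is left.
assert candidates == {envelope}
```

This is the qualitative difference the lecture lands on: a die roll's value stops mattering after the roll, while the envelope's fixed value is what the whole game is about discovering.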
MIT CMS608 Game Design, Spring 2014
24. Indie Games and Aesthetics with Jesper Juul
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I'd like to introduce Jesper. Jesper Juul is an old friend of the lab and currently at the Royal Danish Academy-- so the title is very long. JESPER JUUL: It's actually longer than this. Royal Danish Academy of Fine Arts Schools of Architecture, Design and Conservation, School of Design. PROFESSOR: Right. And you've also taught at NYU. You're a visiting professor here. And you've been a games scholar for a decade now? JESPER JUUL: Yeah, a bit more than that. PROFESSOR: Maybe it's more than a decade. If you take a bunch of the classes here, we sometimes have readings from Jesper-- articles that he's written. But this is another example of the talks that we've been having on what people do once they understand game design, right? Some people study games. Some people do research and write books about it, and ask interesting questions that try to find out more about what makes games fascinating, or what makes the environment in which we create games or play games interesting. So with that, I'd like to hand it over to Jesper. JESPER JUUL: Hi. PROFESSOR: And this is game design 608. JESPER JUUL: Hello, game design 608. So are you doing digital games, or analog games, or? PROFESSOR: We're doing analog. JESPER JUUL: Analog, all right. So I think I'll mostly be talking about digital games right now, but I think it probably applies to some extent. Phil introduced me so nicely. So this is really a part of a conference presentation I did earlier in April. If you're interested in reading the full paper, it's on my website. And it's kind of URL or independent style. And if you just go to /text you'll see some of the other stuff I've written.
And I've written a few books about video games. I wrote one about video games and storytelling, and one about casual games. And my most recent one, called The Art of Failure, is, in a way, about being a sore loser-- it's an essay on the pain of playing video games. So I focus on the question of why we claim we enjoy video games, even though if you look at people who make that claim, they often look quite unhappy, actually. So it's a discussion of that. But this one is a social study of different things across the culture of independent games. So there seems to be a kind of consensus that independent games have become an important aspect of game culture. And you can see this in several ways-- with a very cheap kind of Google Ngram way of showing that something is popular. I don't know if anybody knows why it's never flat, this curve. Does somebody know? So in economic game theory, there's actually a concept called an independent game-- a game that's not attached to other games. So that's why the curve is never quite flat. But it shows you that, from 2000 on, people started talking about independent games. And you can also see that somebody at Microsoft spent a good deal of energy on claiming how independent-game friendly they are with the new console. [INAUDIBLE] And we can probably tell a few different reasons why we talk about independent games. One of them may be that budgets have become too big and development takes too long, so there is this kind of opening for games that are made on a smaller budget. And also it's become easier to distribute games-- it used to be that in order to distribute a game, you had to have a disc and a box, and that's no longer true, fortunately. And this means it's become possible to make games in different styles. And I also found that there's this thing-- which can sound kind of weird-- that independent games, in a way, presuppose the idea of independent games. So let me explain what that means.
If you're making a game on a small budget, one of the things you need to be sure of is that people who see your game don't just think of it as a game with too small a budget-- that they actually understand that this is a game that has some positive qualities to it, or that there's a particular reason why it has a small budget, or that a bigger budget wouldn't have made this game any better. And so I think that the idea of the independent game is really the idea that you can actually say, "I'm making an independent game," and people will understand that this is a game that has a particular set of qualities, where the small budget is a feature. And so you can ask the question: what makes people assume that something is an independent game when they see it? I don't think it's just that it says "independent game" somewhere. I do think there's actually a particular style that's come into independent games. [INAUDIBLE] JESPER JUUL: Essentially you see that when people write about independent games, the first thing everybody will say is that you can't define it. It's the first thing you have to say. So you have people saying that you can't define it because this creates discord in the game community, or that people talk about independence in different ways-- economic, technological, or cultural status. And then, third, people say, well, there's no point in trying to define independent games in the first place. So I think that's actually [INAUDIBLE] On the other hand, if you look at what people who make independent games say, there's actually a lot in common. So Dan Cook talked about how independent games largely favor something that's authentic and deserving. Edmund McMillen likes to talk about honesty and speaking from your heart. He made Super Meat Boy, and Dan Cook made Triple Town. And [? Robert ?] [? Aumann ?]
talks about your personal relationship with the work when it comes to independent games. [INAUDIBLE] [? Chavez ?] has a whole discussion comparing independent games to funk music-- small, kind of personal, anyone-can-do-it kind of thing. And various [INAUDIBLE] have talked about smaller budgets allowing games to be more personal, more relevant. So I think these are quite similar in some ways. They talk about the honest and the traditional and, what I call, minimal complexity: you can understand who made the game. Like we talked about just before with Kickstarter-- if you buy a Triple-A game, it's not necessarily clear who actually made the game. With independent games, there's much more of an idea that you will have a feeling of the author's-- the creator's-- personality. All right, so you see that these are what I call moral and aesthetic claims at the same time. So it's not just saying that these are better games-- it's a bit deeper than that. It's also saying that this is a better way of making a game. Not just that it will be a better product, but that, say, the quality of life of someone who makes an indie game will be better, and also that we will all be better off if more people make independent games: we'll have more communication, more ideas being spread, more diversity, and so on. Now, the thing is, this is not exactly new. So this is the Trellis wallpaper by William Morris and Philip Webb from 1862. One of the things that's striking about indie is that it's actually quite similar to things that have happened several times before. I think particularly it happened with the arts and crafts movement of the 19th century. So when we talk about independent games, I do think it ties into several things, such as the idea of DIY, or the maker movement, or the idea of local food production, like the locavore movement-- this idea that if you go to a restaurant, you should know where the chicken you ate actually came from.
Only yesterday I was at this restaurant called Emu-- I didn't know [INAUDIBLE] and they did list, on the left side of the menu, all of the farms where they get their ingredients. And it has this kind of, wow, isn't it amazing, quality. And of course I've never heard of any of those farms. I have no idea where they are. But it kind of still works for me because you can understand that there's a kind of honesty or something authentic about it. But the arts and crafts movement specifically is discussed as being this late-19th-century movement, where people reacted against industrialization and machine production. And they felt that this included a loss of quality or personality-- so that you didn't even know who made the product, and the product itself would be, kind of, worse. And then proponents like John Ruskin-- he talked about this idea that in medieval times things were much better-- you had the [INAUDIBLE] by the medieval guild, kind of a small group of people making something like a Gothic building. And William Morris talks about this idea of handicrafts-- the craftsperson making something that's much better than what would be made by machine production. And again, not just that it would be better products, but that society at large, the world, would be a better place if we made things that way. And I do think you can say that-- I think it's very clear with the idea of independent games. It's kind of similar in the sense that they talked about the revival of craftsmanship and handicrafts as a reaction against big machine or big corporate production-- mass production. And I think, very similarly, you can say that independent games are a reaction against very large Triple-A production teams. And so we have this idea that we should return to smaller productions, and this will make everybody-- everything better on a number of levels. Does that make sense? I don't know, do you have strong feelings about independent games?
AUDIENCE: An independent game that you get [INAUDIBLE] AUDIENCE: I do completely [INAUDIBLE] I think independent games now have more freedom to explore different mechanics and step away from what sells, because they don't have the million-dollar investments where they have to make sure it sells. And they don't have to be like, oh, it's been proven to do well with the market. They have more freedom in that way. [INAUDIBLE] JESPER JUUL: Yeah. AUDIENCE: So I found that, just for me personally, I don't really play games unless I'm planning on playing them a lot. So I play competitively-- League of Legends or [INAUDIBLE] Brothers. And one of the problems I have with independent games is that I don't feel like they're as developed. They're generally on the more creative side. Which is definitely something appreciable but not something that I enjoy. I prefer getting very good at, physically, mechanics and such. JESPER JUUL: Yeah, I guess [INAUDIBLE] multi-player actually does not [INAUDIBLE] adaptable. There's this new collection out called [INAUDIBLE] Does anybody play that? AUDIENCE: [? Bobby ?] [? Pinchot ?] [INAUDIBLE] AUDIENCE: [INAUDIBLE] [INTERPOSING VOICES] JESPER JUUL: Yeah, that's true, it's not, in a way, a competitive sport. It's not that at all. There's a game I like, [INAUDIBLE]. Yeah. So that's kind of part of that complex. AUDIENCE: So kind of on the opposite note, I think not all, but a lot of independent games have the ability to just be quicker in single-player mode because-- while not all competitive aspects might be there, because massive multi-player [INAUDIBLE] They can be hard in single-player mode because, for example, they don't need to make the investments back. So a lot of people are frustrated when they lose. But they explore that realm more readily. JESPER JUUL: I do think it's also a kind of nostalgia for an earlier time. And I do think that it's also part of the question of difficulty.
So it's part of it in the sense that all of the edges are being removed, or the challenge is being removed, from big game productions to please everybody. If you make an indie game, you can make something that's harder, more focused, and has things like extreme difficulty. I think that's certainly an argument that people make. AUDIENCE: Actually I have a question. Does anybody remember the phrase, Nintendo hard? AUDIENCE: Yeah. AUDIENCE: OK, because that's not what Nintendo is anymore, right? AUDIENCE: Not really. But on the flip-side to that coin, aren't you sort of breaking the number one design rule, flexibility? When you make games incredibly difficult such that you're carving out a very small niche audience and saying, screw you to everybody else. JESPER JUUL: Yeah, and so these are subjects we'll get to in a bit-- that the arts and crafts movement also has this political theme that it would be for everybody-- art by the people, for the people. But then the criticism is that it ended up making conspicuous consumption for rich people. And you see the same thing with the locavore food movement. It's usually pretty expensive, actually. And so it's something that happens-- in a way you can have these ideals of broadening this kind of production, but often it also [INAUDIBLE] the flip-side of actually narrowing it and making it an elite object for [INAUDIBLE] One thing I thought was interesting was what's common in the visual style of independent games. Take three games that we often talk of as being independent: [INAUDIBLE], which has a pixelated or large-pixel style; And Yet It Moves, with this torn paper; and crayons [INAUDIBLE]-- obviously, children's drawings with crayons. And so on one hand these are different graphical styles. But actually, you can see what they do have in common is what you can call a double layer. It's a representation of a representation.
So the [INAUDIBLE] represents 1980s-size pixels, which then represent a game world. And Yet It Moves represents torn paper, which then represents a game world. [INAUDIBLE] represents crayon drawings, which then represent a game world. And compare this to a modern Triple-A game-- you don't really have that. You just have 3D graphics within the game world. So we can say that-- I think that's pretty common. And I think that's the visual representation of the independent game-- I think when we usually see a game and recognize it as being an independent game, it's often because it has this type of style-- a representation of a representation. And often we'll use something from contemporary technology, obviously, but use it to emulate something that's low-tech and cheap, right? So you can see that-- you can compare this to some casual games, like matching games, which might signal jewels or diamonds or something. I think it's very clear that most independent games tend to emulate very cheap materials like torn paper and things like that. I think also the reason why people do this is that, in a way, what you've done is that you're signaling that we have made a deliberate choice to have this style. And we have deliberately chosen to make a game on a small budget. So I think this is [INAUDIBLE] but I think that's what this kind of style signals. And it's also signaling this thing of [INAUDIBLE] authenticity, or knowing who made the game, or transparency in the production process. And so you can think of it like this, that indie is using this [INAUDIBLE] to mean different things. Indie, on paper, means a financially independent team. Then I think that's also [INAUDIBLE] that people in the game-development community talked about indie in a way that it was morally and politically and aesthetically better. Not just better games but also better for everybody when games are made this way, because you can communicate values and ideals, and so on.
And that indie has a particular kind of style. And I think that this style of having a representation of a representation is one that people use to signal, now we are making a game with a small team. And it has this positive value of being authentic, and something where we can figure out what's going on and who made it. I'll just show quickly how that kind of style appeared. So this is looking at the-- do you know the Independent Games Festival? Did you follow this? So that's all right. So anyway, at the Game Developers Conference every year since 2000, there's been the Independent Games Festival. And this is the longest-running festival of independent games. And this is a jury-based competition. And so one of the things that's interesting is looking at the winners of the grand prize in this Independent Games Festival. And one of the things that's kind of odd is that if you look at the first five years, none of the games actually signal independent very well in a contemporary way. So you can see that three of them are somewhat regular games of armed conflict. And then Wild Earth is a Pokemon Snap-style game. And Bad Milk is this weird, never-released associational CD-ROM. You click on things, and then other things appear, and so on. It never came out. But you can see that this feels like it's from a different time. If you see the picture of the two top games, there's nothing that signals independent game in a modern sense. And I think part of this is because at this time online distribution just wasn't that big a deal. And so when people submitted to the festival, sometimes they probably just hoped that they would get noticed by a publisher, who would then fund them so they could make a very big version of the game, which could then be shipped on a disc. And then we see that from 2005 on, the winners of this festival gradually became more of this style I'm talking about. You see the paradigmatic 2-D platformer with some kind of twist.
[INAUDIBLE] Then with the winner we see a low-poly style that I think refers to a movie like Tron. And then we see various takes on watercolors and hand-drawn graphics. And you see, this actually coincides with the gradual rise of digital distribution. So first there are things like downloadable casual games, or those Flash game sites-- it became more possible to distribute a game without having to put it in a box. More recently, we still have this pixel style. But then that gets merged with different things, like a game like Monaco, which has these various lighting effects. And then in Minecraft and [INAUDIBLE] it gets moved into a third dimension-- and this is what I call counterfactual nostalgia-- in a way it's pixelated as if there was a time in the 1980s when people would make [INAUDIBLE] games with big pixels. This never happened, obviously. But it has this steampunk anachronism about it. But still, you can see, it signals that this is a particular game-- or a particular style-- and then they use modern effects on it. In the last two years you have this flat, mostly gray-scale pixel [INAUDIBLE] now. But then there are certain things that seem to be happening in the gameplay, where [INAUDIBLE] simulates to me the life of a poor [INAUDIBLE] and Papers, Please simulates being an immigration officer. And so I think it goes to [INAUDIBLE] people talk about the moral current. Like there is a lot of discussion about participation, in various ways, in games, and certainly a lot of emphasis on trying to make games with more serious themes, so they'll cover a broader range of themes. And so you see-- so this shows you where this kind of style comes from-- the representation of the representation. You'll also see that basically every single winner since 2005 of this festival has had this representation of a representation. All right. What does it mean? Well, I think there are a few contradictions in independent games.
So one of them has to do with what we call the DIY movement-- the idea that anybody can make a video game now with independent games-- versus the idea of independent games as being a way for people to make games that are particularly expertly crafted. So someone like Terry Cavanagh-- he talked about how it's easy for him to make a game with this pixel style because he doesn't consider himself that great a graphic artist. And so this pixel style is easy for him to do-- to make in a convincing way. On the other hand, if you want to make a game like that in a tool like Unity 3D, the tool really doesn't want you to do that. Unity 3D will do serious filtering on your texture, so if you just draw something with big pixels it will be very blurry, like the image on the left. So you have to do various things to change the settings in the rendering of Unity 3D to actually do pixel style. That's not really what the tool is meant for anyway. So this creates the possibility of demonstrating technical expertise by working against the intentions of the tool. And that is a kind of feature. And so I think certainly there's a conflict within independent games-- between independent games as these very small games in which developers have the opportunity to show off how great they are at various technical skills in a delicate and very small system, versus independent games as something that's open to everybody. So you can take a designer like Anna Anthropy, who wrote a book called Rise of the Videogame Zinesters-- How Freaks, Normals, Amateurs, Artists, Dreamers, Drop-Outs, Queers, Housewives and People Like You Are Taking Back an Art Form. So it's about making game development more democratic. And at the same time, when you see Anna Anthropy's games, she's actually a very, very good designer.
So she makes these kinds of games that are very-- she's good at combining elements from game history and using them in a new way. The second element-- the second contradiction-- I think is more on the player end. On one hand, the idea of independent games tends to have this idea of democratization-- it becomes games by the people, for the people. On the other hand, I think also that now that it's so common to play video games-- and more than 50% of the population actually plays video games on a regular basis-- then I think to some extent indie games can be this way of showing that you have a more sophisticated taste than the great masses, right? That if everybody played Candy Crush, then you can show that you are playing some obscure game from the Humble Bundle that regular people don't really under-- have learned to appreciate. Then it becomes more of this kind of fine-wine-tasting issue. So I think that certainly you have break-out indie games like Minecraft, obviously. Which of course is a very, very broad hit across a lot of countries. It sold, what, 50 million copies? AUDIENCE: Yeah, mind-blowingly large. JESPER JUUL: Mind-blowingly large. And then of course still the developer, [INAUDIBLE] He still has this scruffy look to him. So [INAUDIBLE] they said he had stylists to make him scruffy so he keeps his indie credibility. AUDIENCE: He's [INAUDIBLE] JESPER JUUL: Yeah, but you can see this is an ongoing conflict within independent games-- whether this is something that's supposed to be very broad, or whether it is a connoisseur thing. And I should say, I do think that you can actually see this as a result of the fact that so many people are playing video games. So it used to be that you could say the fact that you play video games made you belong to a particular category of people. But now that video game playing is so common, people need to select a certain subset of video games to have an identity as video game players.
And I think that indie games can be seen as a kind of response to that. It's something-- it allows you to feel that you have a particular place, right? So on authenticity, the [INAUDIBLE] Richard Peterson talked about the idea of the authentic-- [INAUDIBLE] So one of the things he studied is country music. And he looks at how different people argue for various types of country music as being authentic in different ways. So it can be authentic in terms of who recorded it, or in terms of style, or various things like that. And so he says that authenticity works when people put in effort in order to make something appear authentic. So it's just like the restaurant spending energy listing all of the places where they got their ingredients, to make the whole menu appear more authentic and local. A particular kind of critical argument against the thing that you like is: to be experienced as authentic, something must be marked as authentic. And this makes it inauthentic, rather, because something has been done deliberately. And you can certainly see all kinds of products, obviously, where you can see there's some kind of advertising agency that has spent a large amount of time trying to figure out how to make that particular thing look authentic-- by choosing the right [INAUDIBLE] or colors, or making it appear like an old country store even though it's a big multi-billion-dollar corporation. And so this would be a critical thing you can say about independent games. But of course you don't necessarily have to make games with this particular style, right? And there is, in a way, something inauthentic about choosing a kind of style to seem authentic. And so I think it's also a bit more complicated than that, right, because, in a way, it's just the fact that it's possible to make a game and choose a particular style.
That style is so interesting because it's actually cheap-- it is fairly cheap to make games with large pixels or scanned paper, or things like that. So even though you could say it's not-- it's something that people deliberately choose, and so it's not authentic in that sense, it's still something that enables game development. It does solve that particular problem of how you make a game on a small budget and make it appear as a deliberate choice, rather than just a game with too small a budget. So you can see that this is what that particular style does. I should say there are a few games that we tend to talk of as independent which don't necessarily match this style. So [INAUDIBLE] is a particularly interesting case because it does have this representation-of-a-representation style. It's made to look like a painting. But in a particular way, it's not meant to look as if somebody tried to offhandedly improvise a painting or drawing. It's actually meant to look a bit like fine art. And so, you see, in a way it has this thing of being a representation of a representation. But here it's actually supposed to signal being fine or sophisticated. And there is-- and I think this [INAUDIBLE]. You guys play [INAUDIBLE]? So [INAUDIBLE] is meant to be a kind of work of art with a capital A. And I do think this is why that particular style was chosen. I think it's a bit simplistic
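The Unity point earlier-- that default texture filtering blurs big pixels-- comes down to bilinear versus nearest-neighbor (point) sampling. Here is a minimal sketch in plain Python, not Unity code; the 2x2 texture and the sample points are invented for illustration:

```python
import math

# Toy texture sampling (not Unity code): the texture and sample
# coordinates below are made-up values for illustration only.

def sample_nearest(tex, u, v):
    """Point sampling: snap to the closest texel -- keeps hard pixel edges."""
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return float(tex[y][x])

def sample_bilinear(tex, u, v):
    """Bilinear sampling: weighted average of the 4 surrounding texels."""
    h, w = len(tex), len(tex[0])
    fx, fy = u * w - 0.5, v * h - 0.5   # texel space, centered on texel centers
    x0, y0 = math.floor(fx), math.floor(fy)
    tx, ty = fx - x0, fy - y0

    def texel(x, y):  # clamp lookups at the texture edges
        return tex[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    top = texel(x0, y0) * (1 - tx) + texel(x0 + 1, y0) * tx
    bot = texel(x0, y0 + 1) * (1 - tx) + texel(x0 + 1, y0 + 1) * tx
    return top * (1 - ty) + bot * ty

# A 2x2 "texture": one white texel (1) among black (0).
tex = [[1, 0],
       [0, 0]]

# Sampling between texels: nearest keeps a hard edge,
# bilinear averages all four texels into a blurry gray.
print(sample_nearest(tex, 0.5, 0.5))   # 0.0 -- crisp edge
print(sample_bilinear(tex, 0.5, 0.5))  # 0.25 -- blurred
```

In Unity itself, if I recall the API correctly, the equivalent fix is setting the texture's filter mode to point sampling (`Texture2D.filterMode = FilterMode.Point`) instead of the default bilinear-- which is the "changing the settings in the rendering" step the talk describes.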
|
MIT_CMS608_Game_Design_Spring_2014
|
21_Meaningful_Decisions_in_Gameplay.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Today I've got quite a lot of stuff to go through. So hopefully we'll get to the actual game playing, but if not, don't worry, you'll actually get to play the games-- the games that we'll end up playing today are games from last year. A lot of them are actually from the end of the semester-- the final project, which means they are not answering the same question that you're trying to answer with your very first project. But it should give you a sense of what the scope of this class actually is. Last week, a couple of you played the LEGO game, Block Party. That was a two-week project, and that was just without staff. These were more like four-week projects, designed by students with larger teams, and probably closer to the kind of game that you're going to end up making. I want you to keep in mind that you're not constrained to building games that look exactly like those. In fact, after you've had a chance to play some of the games from last year, I hope you have a chance to think a little bit about what they did not do. All these games are this sort of board game. And you will see a lot of them are this sort of board game. You'll find out what this is. And you can think a little bit about what it would be like to design a game that's not that. What if you just wanted to make a card game, for instance? You can. There's no reason why you couldn't. What if you want to do a live-action game where you actually move around with your body, that sort of thing? That's totally doable. I really, really-- try not to put any live digital components in your game.
I know some people-- unless it's something simple like a timer that you can run off your iPhone. OK, maybe that's fine. If you're actually writing code, then the scope of this project just went beyond what I expected you to do. And the possibility of failure is-- go back please. AUDIENCE: Very much. PROFESSOR: But not failing the class-- your project just failing during runtime. So that's probably something you don't necessarily want to do. We had two readings, one for last Wednesday and then one for today. I tried to cover quite a bit of the first reading during class itself. Was anyone here not here on Wednesday? OK, all right. So we need to make sure that your name's on the attendance sheet. And you should come and talk to me after class. And we'll make sure you get a copy of the syllabus and everything like that, and make sure that you know the expectations for this class. Everybody who was here, hopefully you found your name on the attendance sheet, because I had to manually add a few of you. So I hope you got-- I got it all right. Now one of the points that I try to get across-- is this recording? Yeah. One of the points that I tried to get across on Wednesday was-- all right, who's the most important person when it comes to actually playing a game? AUDIENCE: (COLLECTIVELY) The players. PROFESSOR: The players, right, not the designers, not you. But you, when you actually play someone else's game-- then you're the most important person. Whether that person's experience is problematic or exciting or engaging or outright hostile to other players, or something like that, they're making it the way that they want to take it. And then you as the designer are going to run through a number of different challenges trying to give them the experience that you're trying to create for them.
But you also have to acknowledge that if somebody wants to take your hardcore strategy game and turn it into a lighthearted party game, or vice versa, that's their prerogative. And if they say, hey, I really, really wanted this to be a lighthearted game, and you gave me this hardcore game, and it made me hate the people around me at the table, and I don't like it-- that is fine. It's OK for them to not like your game based on criteria like that. However, one of the things that comes up in the Brathwaite reading is this concept of meaningful decisions, right? Well, what's-- Sid Meier-- how many of you have heard of Sid Meier? Yeah, what games can people remember from-- AUDIENCE: (COLLECTIVELY) Civilization. PROFESSOR: OK, Civilization. Railroads? I think he also did the first Rollercoaster. Pirates? Yeah, yeah. Mostly a computer game designer, although if you actually play his games, they're a lot like digital board games, especially Civilization. And he has a quote, which just says games are a series of interesting decisions. Not necessarily meaningful but interesting decisions. And what are some of the ways that a decision can be meaningful? What are some of the things that might come up in the reading or occur to you right now? That a decision in a game can be meaningful-- what does it mean? What does it mean to be meaningful? Couple hands? AUDIENCE: When you make that decision, the game's state changes, or the outcome changes? PROFESSOR: OK, so when you make a decision, the outcome actually has changed. So the corollary is that if you make a decision and the outcome hasn't changed, it wouldn't be terribly meaningful. OK? AUDIENCE: That's what I was going to say. I guess a lot like doing something and, I don't know, getting money in a money-based game. PROFESSOR: So getting money? So getting some sort of quantitative reward based on the decision that you've made.
AUDIENCE: Also you can get a sort of awesomeness award even if it doesn't affect who will win or not. It could be something really awesome in the game, and you make a decision-- PROFESSOR: So kudos or something like that. It's like, wow, that was an amazing-- like in football, for instance. That was an amazing play. It didn't necessarily complete, but it would have been awesome. But maybe it was an awesome defense or something like that. That could be a meaningful decision. How about if you decide to roll a die? Game state has changed, usually. Is that a meaningful thing? AUDIENCE: It could be if it's a choice you have. Do you roll a die at one point of the game or do you not-- choosing that is meaningful. But, do I roll the die now, or do I wait a minute before I take my turn-- that's not really a meaningful decision. PROFESSOR: OK, it's like, let me think about this. Then I'll roll it eventually. All right, someone's hand is up. I'm going to go this way. AUDIENCE: So if you had meaningful alternatives as well? PROFESSOR: OK, so again, if you had the choice not to roll the die, OK? AUDIENCE: I would say in most games the die roll would be meaningful but not a decision. So the die roll basically tells you what to do. You're not actually thinking about it. But it's still meaningful. It can still change the game state and affect who wins or whatever happens to me. PROFESSOR: So the outcome of the die roll is usually meaningful, but you may not have had to decide to do that. (Starting with you, we'll go that way.) AUDIENCE: To go with that, in a game like Yahtzee, when you roll the dice you're trying to make a certain roll. But the die rolling in something like Monopoly-- it's not really a decision. You have to do it every turn. PROFESSOR: Mm-hmm. Yeah, and that's an interesting thing.
I talked a little bit about mechanics on Wednesday and the idea that if you think of a mechanic as something a player does to change the game state, the die-rolling thing is a weird thing, because the game state is changing, it's something a player does, but the player didn't decide to do that, right? The player is just told to roll this die. AUDIENCE: Well, they kind of covered [INAUDIBLE]. PROFESSOR: OK, yeah, that's good. AUDIENCE: We can choose a different type of roll. PROFESSOR: Oh yeah, again like Yahtzee-- which die, right? Yeah? AUDIENCE: I learned this definition of [INAUDIBLE] series of decisions, like the card game War, for instance, or the board game Candy Land. PROFESSOR: There's a version of Candy Land which lets you choose which pile of red cards to draw from. But is that meaningful? A stack of randomly shuffled cards? AUDIENCE: Yeah, it has to do with [INAUDIBLE]. AUDIENCE: If there's a random pile [INAUDIBLE]. AUDIENCE: If you shuffle them and take the top card, it doesn't really matter. AUDIENCE: But if you do Chance or Community Chest. PROFESSOR: I feel like there was something else that you wanted to add on to that. AUDIENCE: No, I was just going to say, it just means that these things aren't games. And-- AUDIENCE: It seems okay actually. PROFESSOR: The thing is that they might be games, but that particular decision may not be the meaningful one. And it's possible that in the entire game, maybe you don't get that many meaningful decisions. That doesn't necessarily make it not a game. It might make it not a very good game, which is a different thing. That's a qualitative and often subjective experience, right? To a two- or three-year-old it may be an awesome game. You get to play in a land of candy. A few more hands I thought I saw? AUDIENCE: Yeah, on that topic.
I think there's a reason why people over a certain age never really want to play Candy Land anymore-- because they realize that they don't really do anything, so they get really bored with it. For kids there's some other thing going on more than the meaningful decision-- it's actually just doing something and moving, or winning cards, that makes it fun. PROFESSOR: The thing that War and Candy Land let you do-- for instance, they give a five-year-old a winning chance against an adult, right? That could be huge for a kid. It's like, wow, I can actually play this with an adult? Learning how to take turns is one thing that people actually get from games like Candy Land. I can't remember if I brought this up last time. I believe Candy Land was invented to keep kids from getting polio from each other. That might be urban legend. AUDIENCE: [INAUDIBLE] PROFESSOR: Huh, it's true? AUDIENCE: It keeps kids indoors. PROFESSOR: Keeps kids indoors and tries to keep them from their-- AUDIENCE: --interacting with other kids. There was polio going around at the time. If you stay indoors with the people you already know aren't infected, you're going to be OK. PROFESSOR: So, OK, that was-- AUDIENCE: With polio, you can't do much of anything. PROFESSOR: It's a huge inversion of the get-out-and-get-some-exercise idea. It's a serious game. It has health benefits. Let's play in the land of candy. I would like to go back a little bit to this idea about changing the game state, right? You make a decision in a game, and you've changed the game state. And it wasn't a decision-- it wasn't-- I get to roll from three identical dice. Which die do I roll, OK? All right, that's not a real decision. Unless they are loaded and they're not identical. But anyway, what do you need to be able to communicate to the player that their decision actually changed anything? OK, actually let me flip that around.
If you don't let the player know what changed the game state, even though the game state might have changed, is it that meaningful a decision anymore? You did something. Some numbers changed inside the system. It's going to affect how things go later. But you don't actually know what happened. Has anyone played a game like this? Does that sound familiar? AUDIENCE: I feel it's very much dependent on the player and the system, because a lot of times something like that will happen and most people say, oh, nothing really changed. My decision didn't matter at all. Whereas someone that takes the game more seriously might actually realize something changed just from probabilistic analysis, something ridiculous. And also if you expect players to get very into your game, this is the stuff that you can leave in, because they will understand it and figure it out on their own. And that has to be part of the game. Whereas if you expect players to pick it up and play it three times in their life and then move on, then it probably won't help them that much and you should probably give them more feedback. PROFESSOR: Yep, some other hand? AUDIENCE: I think, also, if you have an adventure game where there's a story or something, and you decide to just not help some guy, and then he becomes an evil warlord later, there are some interesting things you can do with that. PROFESSOR: There are long-term consequences versus immediate consequences. Immediate feedback, which is a term that you brought up. AUDIENCE: There are certainly games where-- to make an example, in Dominion, oftentimes you can draw cards that don't help you, just pointlessly causing a reshuffle. But if I were to do this without thinking-- would it actually be slightly beneficial or slightly [INAUDIBLE] to actually do this? Unless you really think about it, you won't even notice that it caused any sort of change there.
Thinking of the Hitchhiker's Guide to the Galaxy game, wherein there's a specific point where you must-- if there's a dog, you need to give it a sandwich. Otherwise you doom yourself 200 turns later, hours later in the game. PROFESSOR: Right, and that hurts, right? AUDIENCE: What's the game called? AUDIENCE: The Hitchhiker's Guide to the Galaxy text adventure. AUDIENCE: Another really straightforward example is competitive games where you only get some of the game state information, like Starcraft or something. You can't really see what your opponent is doing. PROFESSOR: Yeah, it's interesting, because usually you're not [INAUDIBLE] changing what your opponent does based on what you can't see. And I say usually because it does happen. Occasionally, your opponent sees something that you didn't see, and they change their strategy based on that. I've played a number of adventure games that actually have this more as a puzzle-solving thing. So it's not like 200 turns later something changes. It's like something changes right away, and you need to figure out what changed. So it's not immediate feedback. There's feedback somewhere in the world. But I want to say I probably didn't enjoy those games. Maybe someone does. Something-- Do you want to ask it? AUDIENCE: Well, you have to be careful that it doesn't have too much complexity or some sort of unanticipated change, so that there's too steep of a learning curve to actually enjoy it. If you play it and it takes two hours, and you realize something you did screwed you up, you might not want to play again. PROFESSOR: Yeah, it's like-- AUDIENCE: [INAUDIBLE] PROFESSOR: --I got the wrong ending because I made this decision five hours ago or something like that. OK, how about the alternative, when something-- my game state changes, and you're not really sure how your decision came up with that outcome. So not random exactly, but overly complex maybe.
AUDIENCE: Well, you don't end up with a good mental model of how the game works. And so even though you make meaningful decisions, you don't know what the meaning is. And so any time you're making another decision, you may not be able to actually choose in the way that you want to. PROFESSOR: Yep, OK, mhm. AUDIENCE: It prevents a chess style of approaching the game. Right, in chess you see the board, and a lot of the really good players will see X number of moves in advance. But when things just happen, when they just randomly appear, you have to react on the fly. You can't plan out all of your moves. PROFESSOR: This is reactive play. AUDIENCE: I think this one's a lot bigger an issue than the other one, because this could lead to a lot of frustration with players. If they can see the game state changing and know that they're affecting it but don't know how they're affecting it, I just feel, as a player, that would frustrate you to no end. And that would make you think that everything you do doesn't really matter and that you're going to win or lose regardless of what you're doing. And it kind of turns it into a Candy Land game even if it's not. PROFESSOR: Mm-hmm, OK. AUDIENCE: [INAUDIBLE] the game-- occasionally I've played some word [INAUDIBLE] games that are hard to understand, where the mechanics aren't intuitive, like Village. We sort of did stuff, but afterwards, we didn't really understand how we were changing things. It took a while before you really understood what was happening. And there are many games where you can do very poorly or very well, and it takes a while to understand exactly why you're doing very poorly or very well. PROFESSOR: So sometimes even success can be bewildering, right. Yeah?
AUDIENCE: I feel like this sort of thing is easier to deal with in a digital game than a board game, because in a board game, usually people have to take their turns and it plays out more slowly, whereas in a digital game, if you do something and something changes, maybe you can just start over and do it again and try a different thing. PROFESSOR: All right, hold on. So in a digital game? AUDIENCE: Yeah, trial and error. PROFESSOR: OK, so in a digital game-- because some games let you save state and then reload state. So you can say, well, what if I tried this decision? And then there's an interesting phenomenon that goes with that in a lot of strategy games called save scumming. And that is where you know that the outcome of something is based on a probability, and so if you save and reload often enough, you will always be successful. And there's an interesting strategy -- this is completely aside from what I wanted to talk about today, but it's an interesting strategy that game designers use. And that is to, in a digital game, save the random number seed at the time when the save is made, so that the outcome's always the same. I expect tech support calls when you implement that, though, because people are like, your random number generator is broken. I tried this thing 15 times. It's supposed to have a 90% success rate, and I always fail when I do this one attack. And it's like, yes, because you saved the random number seed. So that is actually a problem, because people have this concept about the mechanics of-- what does 90% probability of success mean? And I am specifically thinking of XCOM right now, if anyone wants to play that. 90% probability of success to a lot of people means that it's going to succeed. Now for a lot of people, probably a lot of people in this room, it means you will probably succeed.
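That seed-saving trick can be sketched in a few lines. This is a hypothetical illustration (not the code of any particular game, and `save_game`, `load_game`, and `attack` are made-up names): capture the RNG's internal state alongside the game state when saving, restore it on load, and the "re-rolled" attack comes out the same every time.

```python
import random

def save_game(state):
    # Capture the RNG's internal state along with the game state,
    # so a reload replays the same sequence of "random" outcomes.
    return {"game": dict(state), "rng": random.getstate()}

def load_game(save):
    # Restoring the RNG state means the next roll is predetermined.
    random.setstate(save["rng"])
    return dict(save["game"])

def attack(hit_chance=0.9):
    # A nominal 90% chance to hit.
    return random.random() < hit_chance

random.seed(12345)
save = save_game({"hp": 10})
first = attack()
# Reload and "re-roll": the outcome is identical every time,
# which is exactly what frustrates the save-scumming player.
state = load_game(save)
assert attack() == first
```

Without the `rng` entry in the save file, each reload would draw a fresh roll, and saving and reloading enough times would always eventually succeed.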
But for a game where the random number seed has been saved, if you fail and then reload from the save, it means you will always fail. That's a different concept of how the random number generator works. It makes a lot of sense to people who are game designers or computer scientists, but may not make a lot of sense to a lot of players. I get to re-roll again when I save in those, right? But now you get to re-roll at exactly the same time, in exactly the same way, which means you get the same outcome. So this brings me to the second reading, which is Don Norman's first chapter in The Design of Everyday Things. The book used to be called The Psychology of Everyday Things, which had a nice little acronym of POET. But then people had trouble finding the book, because they were looking for design books and it was shelved in the psychology section. So he changed the name to The Design of Everyday Things, which I think is a really interesting application of the kinds of things that he's talking about. You expect to find this book somewhere, and it's not in that section. So you make a change and you iterate on it. And if you look at the copyright, it's actually still The Psychology of Everyday Things. He talks about visibility. And what does visibility do in a syst-- in a design? AUDIENCE: It helps people understand the qualities of what they're doing or what they want to accomplish. PROFESSOR: Yeah, they have an intent, right? They want to accomplish something. And it gives them a clue on what they could do to accomplish that. So already, that's a direct application to games, right? You have a goal in mind. It's like, I want to accumulate more cash in this game. I want to outproduce my opponent or something like that. All right, what are all the things that I can do? What's in front of me right now?
Maybe in a board game, maybe in a card game, maybe in a video game, what's telling me, visibly, right now, that this might be the way I get to do that? In a lot of strategy games, some of these things are very literal, right? So-and-so technology gives you this bonus. I guess a lot of role-playing games also have this, right? I want to hit things harder. Oh, look, this thing gives me plus one to attack. All right, OK, that's a very literal thing. Now, how much effort you need to go through to actually find that piece of information, and whether plus one is actually a meaningful difference at all, depends on the design of the game. But visibility has something to do with the intent of the player -- of the user, in Norman's case. But he's not talking about games. He's talking about the design of everything. And it has to do with what the system can actually do, the actual operations of the system. So I'm playing a game like-- I'm playing a real-time strategy game, and I have a tank. And I want to make it do doughnuts. And it's like, well, the system doesn't actually support that. There's no physics simulation for this tank. So I need to be able to convey to the player that this is not something you can actually do in the game. You can tell your tank which hex to move to, but that's it. Maybe the tank doesn't even turn, right? So the other thing is this concept of mapping. Again, it has to do with the player's intent, what they want to do, what they want to accomplish. But mapping, instead of the actual operations of the system, has to do with what you can see of the system. So there are affordances and there are constraints. These are both words that are introduced in that reading. I think affordances is introduced in this reading. What's an example of an affordance? AUDIENCE: If you can sit on a chair? PROFESSOR: You can sit on a chair? What about this thing tells you that you can sit on it?
AUDIENCE: It seems sturdy, and it's got a place for your butt. PROFESSOR: Yeah, it's got a nice little butt-shaped thing here, right? It's not made of spikes. It's got at least three legs, which may help, evenly distributed, which means that it's not going to tip over. AUDIENCE: You can also measure it relative to other objects [INAUDIBLE] also. And that you-- PROFESSOR: OK, there's a little bit of cultural familiarity with other chairs that you've seen. AUDIENCE: What about a handle on a door? PROFESSOR: A handle on a door. What does a handle on a door allow you to do? AUDIENCE: It's like-- it's a place for your hand. PROFESSOR: It's hand-shaped. It's-- AUDIENCE: If you hold your hand out in a natural way, you can grab it-- looking at the text. Oh, I wasn't thinking of that handle. I was thinking of the vertical type of handle. AUDIENCE: The one that looks like a U-shaped tube-- like that. AUDIENCE: Yeah, exactly. PROFESSOR: So yeah, it's the right kind of shape. If it were one of those door handles the size of a supporting column, you wouldn't necessarily think that you had to grab it, right? AUDIENCE: An outlet is for sticking things into. You can see these holes. PROFESSOR: Like fingers? AUDIENCE: They're not finger-shaped. PROFESSOR: Because they're not finger-shaped. Yeah, although it has a kind of weird happy face on it, which I always thought was a little bit strange. That might have been designed, too, actually. That might have something to do with it -- this won't kill you, really, so you should put it in your home. I want to find out more about the history of the Edison plug. AUDIENCE: It's actually-- the design with the ground on the bottom is a bad idea, because if the plug starts falling out and something touches the exposed pins, it will not be the ground one. So it would be better to flip it. PROFESSOR: Where's the ground-- up top?
AUDIENCE: Yeah, because then even if it starts coming out a little and something falls down between the-- PROFESSOR: So it's bad design. It's bad design because you want to put it on your wall in a way that it smiles at you all the time. But when it's the other way around, it's actually a little more stable. British plugs, actually, usually have the ground pin on top and-- AUDIENCE: The British just have the [INAUDIBLE] pin. There's [INAUDIBLE]. Right? PROFESSOR: That would be the-- what would it be? AUDIENCE: [INAUDIBLE] AUDIENCE: The two round circles. PROFESSOR: Yeah, yeah, the two round circles, yeah. AUDIENCE: They're using-- PROFESSOR: Dan, I think you're thinking of Euro pins, which are round. The original style is flat, but they are twice the width. You could stick your finger in a British socket, which means they have to design all kinds of protection mechanisms, plastic springs, and things like that, just to prevent you from sticking your finger in it. This one's actually nice, because the biggest hole in there isn't the lethal hole -- it's the one that doesn't have any current running through it. So an affordance is something which suggests this is how you can use it, right? Something that you can get a hand around suggests you can grab it. Something with a movable hinge suggests that's the direction that you move it in. He talks about materials like wood and glass, right? Glass is for looking through. Glass is for smashing. Wood is for holding things together. And wood is possibly for writing -- you've got this porous material. It paints very easily, that sort of thing. And in games-- let me just bring in an example. How many of you have played Pit before? Really, really-- this is the box. Let's talk about the things in here. There's a rule sheet. I guess it affords reading, but I'm not going to talk about that. It has this thing.
Actually, what do you do with this thing? AUDIENCE: Ring it. PROFESSOR: Slap? [BELL RING] Yeah, that's what it does. OK, all right, so now that you've seen what this thing does -- forget the rule book for Pit completely. If you have this in your game, what is this thing good for? AUDIENCE: Getting attention. PROFESSOR: Getting attention, yeah. It's loud. AUDIENCE: Annoying people? PROFESSOR: Annoying people. You could use it to annoy people. It's like-- you want to drown out what they're trying to say or something and just keep hitting it. AUDIENCE: Signalling that you're the one that completed the objective. AUDIENCE: [INAUDIBLE] PROFESSOR: OK, yeah, so the completion of something, the end of-- because it's very percussive, right? It's not just loud like an air horn that goes eh. This one actually has-- [BELL RING] --a very, very sharp [INAUDIBLE] sound. What else? What else about this? AUDIENCE: It's shiny. PROFESSOR: It is shiny. Makes you want it, right? AUDIENCE: You can hold it in your hand. And so you might be able to conceivably pass it around. PROFESSOR: You could pass it around. This could be controlled by different people. It's not a huge thing for anyone. AUDIENCE: There's only one of them. PROFESSOR: There's only one. So then it becomes even more desirable, because it's the only-- [LAUGHING] AUDIENCE: Seriously. AUDIENCE: Nice. PROFESSOR: It's something that can stand nicely on a flat surface. But it's not rubberized or anything like that, so if you put it on an inclined surface or something like that, it won't stop sliding. So this implies that it's going to sit on a table somewhere. It also comes with a bunch of cards. Ooh, whoa. What happened to these cards? Good Lord. OK, the paint could have rubbed off or something like that. So don't worry too much about the text and the graphics. But just look at the card. What do cards allow you to do?
What are the affordances of cards? Hm? AUDIENCE: You can hold a couple of them in your hand. PROFESSOR: You can hold a couple of them in your hand at once, OK. AUDIENCE: Concealing, because there's two sides. PROFESSOR: Yeah, there's a side that you can put no useful information on, right? Besides the brand of the game, sure. But they're all identical, so you don't know what each one is. AUDIENCE: Collect. You can have a bunch stacked onto each other. PROFESSOR: You can hold a lot of them in your hand at once. What else? What else about this? AUDIENCE: Pattern recognition is really easy. PROFESSOR: The way they've been designed makes it possible for you to do pattern recognition. Something which they didn't do is use different colors, which might make it even easier, at least for people who are not colorblind. But they've arranged similar elements in the same place. And they didn't use colors to denote the numbers. What else about these cards? AUDIENCE: Just cards in general-- you can exchange them. PROFESSOR: Yeah, I guess that's up to you. And in fact, Pit does that. Pit's one of those games where exchange is a real-time thing. We'll probably play it later in the semester, because it's so light and so easy. Yeah? AUDIENCE: It's easy to have a deck and then draw from it. PROFESSOR: Oh yeah, you can have this randomizing thing where you just have a whole stack sitting on the table, and you don't know what you're going to get. And you just grab one. Amazingly, it's actually pretty easy to just grab one, as opposed to five at a time. AUDIENCE: It's easy to mix them up too, if there's a really good random aspect to the game. PROFESSOR: Because of this? Because of shuffling? Yeah. Or 52-card pickup. It fits in a hand. It's a little bit smaller than it needs to be in order for you to hold it comfortably like that. But these particular cards are a very, very good size for shuffling. AUDIENCE: They're black. You can put them on tables.
PROFESSOR: You can put them on things like tables, yeah. You can deal them. You can flip them upwards and downwards, sure. AUDIENCE: There's also the fact that you have to put them either face down or face up. You can't really place them on their side. AUDIENCE: Yeah, they're really terrible for building things out of. It's possible but really hard to make a card stand up. So it makes it really obvious that it's either this or this. Orientations other than face up or face down aren't really considered, aren't really part of the game. AUDIENCE: I was going to say that they're rectangular? PROFESSOR: Yep. AUDIENCE: So going back to the point of orientation, maybe they're vertical and horizontal-- PROFESSOR: Mm-hmm, it's [? tapped ?]-- yeah, it's [? tapped ?]. AUDIENCE: It'd be hard to do with a square or circular card, but it works with a regular, elongated shape. PROFESSOR: Conversely, a square card could afford full rotation in any direction. We'll get to that, actually, in the next one, where you can just rotate things around. AUDIENCE: The stiffness and shininess of them distinguishes them from paper. In some games you have papers that you write on that are disposable. Pass a paper. PROFESSOR: This is supposed to last a little bit of time -- multiple play sessions, at least. So yeah, you've already identified a bunch of things that cards do, which are all very accurate. And that's a lot. Cards do a lot. And when you're designing a game, you need to think about whether cards are the right choice for your game. And you've gone through a pretty deep analysis about what this thing does that might make it appropriate. Something that might be a little subtle -- the rounded corners actually make it much easier for you to do things like this. If it weren't a rounded corner, it's actually pretty uncomfortable to do a fan. It's not like you couldn't. You totally could.
Stationery stores actually do sell corner punches to round off your cards. It is not something that I would actually recommend that you do during prototyping, because it takes too damn much time. But if you were to design a game for home, for your family, or something like that, and you want to make it a pleasant experience, you might just want to spend $2 on a punch and punch the corners out. It's really, really hard to do it consistently by hand, by the way. So it could take away the whole information-hiding thing. Oh, the one that was badly punched, that's the joker. Let's see what else I want to talk about. So, back to the idea of mapping. You've got your intent -- here is something that you want to do as a player, maybe hide information from other people. And then there are the affordances of the system. Now, if I wanted to hide my cards from you, I would hold my cards in a way that you can only see the side that doesn't reveal any useful information. So that's a very, very direct, clear-- what he calls natural mapping, although I am not quite sure that phrase is very easy to use in practice. It gets a little bit more complicated when you actually look at the system that the game is trying to reproduce, right? So far I've just been talking about cards. I haven't been talking about what the rules of the game are. How many people have played Carcassonne? OK, a couple of people. We should be able to get a chance to play this later this semester. I'm pretty sure it's already in the syllabus. So there are a couple of things in this game. There is this board. There's the back of the board, which is not colored and has a design on it. We can just pass this one around. It's got little playing pieces that are referred to as meeples by the hardcore board game fan base, I guess. They look like little people. Actually, there are probably enough in there for everyone to grab one or two.
And then you can just take a look. I want them all back, but you can take a look at them. Whoops. And a bunch of tiles that I will also hand out. I'll hand out half to that table. AUDIENCE: Something you can look at. [INAUDIBLE] PROFESSOR: Just take a look at the pieces, and let's start with the tiles. Some of you have gotten just the meeples. Some of you have gotten the tiles. What do the tiles suggest, just by looking at them? AUDIENCE: [INAUDIBLE]. Terrain? PROFESSOR: Terrain, all right, something to do with land. What else? AUDIENCE: The various terrain features seem to match up. PROFESSOR: The various terrain features such as the-- AUDIENCE: --rivers and roads. You can match them up. PROFESSOR: Yeah, they line up nicely when you put the tiles in a grid with other tiles, right? There was a hand back in the back of the room? No? AUDIENCE: You can rotate them. PROFESSOR: Yeah, you could ro-- AUDIENCE: Squares. PROFESSOR: Yeah, squares. Unlike the cards, which are taller than they are wide, or wider than they are tall, these are more or less the same size from all directions. So the idea is that maybe you could just freely rotate these things. AUDIENCE: They're identical on the back. PROFESSOR: They are all identical on the back. And that tells us about these tiles something that we already know about cards. AUDIENCE: You want to hide the information. PROFESSOR: You want to hide the information? AUDIENCE: Or. PROFESSOR: Or-- AUDIENCE: [INAUDIBLE] PROFESSOR: Or you want to shuffle them, yeah. The fact that you've got a hidden back gives you quite a lot of different possibilities. This game in particular uses it mostly for the shuffling, the randomization. You don't know what tile you're going to draw. So for people who haven't played this game, and you're looking at all these tiles, what do you think you do with these tiles? AUDIENCE: You match them up [INAUDIBLE] on them? PROFESSOR: Congratulations, that's Carcassonne.
You figured out the game. You match things. You make big things. You put people on them. That track that's going around the board -- I've got one question that is probably a little bit less insightful for this particular lecture. Anyone want to guess what that is? Someone who hasn't played the game? AUDIENCE: [INAUDIBLE] PROFESSOR: You haven't played the game, right? AUDIENCE: Yeah, I've played some other games, but it's probably just a scoreboard or something. PROFESSOR: There's a scoreboard. AUDIENCE: You do something to move yourself along the path somehow. And then the first person to reach the end probably wins. PROFESSOR: Yep, OK, good. Well, yeah? AUDIENCE: Well, it looks like it connects back. So it makes me think that maybe instead of winning by just getting around, maybe every loop, you get a new tile or something like that. PROFESSOR: OK, it keeps going on and on. It's a combination of all three. It is a scoreboard. If you loop around, I think it means that you've got 100 points or 50 points. So you just add 50 to your score every time that you go around. And what fits on those things? What would you place on that board? AUDIENCE: The little people? PROFESSOR: The meeple. You place the meeples on the tiles. You place the meeples on the board. You wouldn't place a tile on that board, because there are no spaces for them there. So you've already got a mapping of probably what intent you've got. Let's make some big things and put our people on them. And the affordances of what you can do with these tiles suggested that to you. This is a game about making large patches of similar things and then putting people on them. Oh, my notes are over here. Why does that keep coming on? It keeps showing private information. I'm sorry you keep being shown my family photographs. So there's a whole bunch of ways that you can help people with these sorts of mappings. We've been talking a lot about visual and physical ones.
I guess Donald Norman would describe a lot of these as spatial metaphors. Spatial mostly describes things like driving a car: you turn the wheel to the left -- specifically, the top of the wheel to the left -- and your car is directed to turn left, that sort of thing. There are bodily metaphors as well. Things that are high are supposed to be good and happy; things that are low are when you're feeling depressed and can't pick yourself up off the ground -- sad or bad. A lot of these metaphors are actually arbitrary. A lot of us have learned them through culture and socialization in the world. But that means that these might be things that you can play off. And we're going to go into a little bit more detail -- I'll give you a couple more examples in about two weeks when we revisit the idea of user design. He talks about things like single control, single function, where if you've got something that does something, you might not want to make it do yet another thing on top of that, because that starts to get really, really confusing. The meeples, for instance -- yeah, you place them on the map, but you also place them on the scoreboard. I personally think this game would be a little bit easier to learn if they just gave you a different piece for the scoreboard that wasn't the meeple -- the same colors, just a slightly different piece -- because pieces never move from the scoreboard to the tiles or back. The other thing that I want to talk about is: what do these tiles not let you do? Well, they did let you do that. But was it easy? AUDIENCE: No. PROFESSOR: No, OK, all right. They're also not very good building materials, just like cards aren't. AUDIENCE: Holding them is kind of hard. PROFESSOR: Holding a whole bunch of them is hard. They're really thick. And you saw the difficulty I had just trying to pull half of them out of the box. They were unwieldy. AUDIENCE: Shuffling. PROFESSOR: Shuffling's tough.
You can-- all right, maybe I'll do this once a game. AUDIENCE: Put them in a bag maybe? PROFESSOR: Hm? If you had them in a bag, yes. A bag is its own affordance, right? It affords its own thing -- take Scrabble, for instance. Scrabble works really, really well with a bag. So if you wanted to randomize things, yeah, you could skip the whole back-hiding thing, just put them all into a bag, and then just pull one out at random. So these are constraints. You're not really supposed to have more than one tile at a time in Carcassonne. You are supposed to draw one, figure out where it goes, put it down. You're not supposed to ever hang on to two. So it's OK that the tiles are designed in such a way that it makes it hard for you to hold on to two. The cards, for instance -- you're not really supposed to stack them. They are not very useful to you when you've got them face down, because you can't really see the information. With a small number of cards, like in poker or something like that, maybe you could remember what you had face down. But if it's a large number of cards, like in Pit, you really want them held face up, facing you. So the idea is -- in Pit, you don't really ever put them face down. I think there might be a rule about placing them face down right at the end of the game. But you don't actually-- no, that's not even true. You don't ever have to place them face down. When they're face down, they're not interesting to you. You should have had that information already. So these are constraints. These are things-- this is another way that the visibility of the system can help you with those mappings. You're looking at the system and the pieces that it's giving you, and you're thinking, what can't I do with these things? And that's probably not what the game wants you to do with these things, if the game is designed well. Games can be designed poorly. That's a caveat. Everything that I say is a qualitative, subjective statement.
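The bag idea maps cleanly onto code: a shuffled face-down stack and a bag of tiles afford the same operation, a uniform random draw without replacement. A toy sketch in Python (hypothetical -- the tile names and function names are made up, and `draw_one` models the Carcassonne constraint of holding only one tile at a time):

```python
import random

def make_bag(tiles, seed=None):
    # A bag and a shuffled face-down stack afford the same thing:
    # uniform random draws without replacement.
    bag = list(tiles)
    random.Random(seed).shuffle(bag)
    return bag

def draw_one(bag):
    # Draw exactly one tile at a time -- you place it
    # before you ever hold a second.
    return bag.pop()

bag = make_bag(["road", "city", "cloister", "field"], seed=7)
tile = draw_one(bag)  # one random tile; three remain in the bag
```

Whether the physical randomizer is a shuffle or a bag only matters for the handling affordances (thick tiles shuffle badly, a bag hides them just as well); the underlying draw is identical.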
And that goes back to something that was brought up earlier about mental models, right? As you play around with these pieces -- of course you read the rules and you see the illustrations, and maybe you read the back of the box -- I would suggest that usually you start trying to figure out what a game is when you pick it up off the shelf and you start looking at it. And it says, hey, this is a game that I want to play. It says Deluxe Pit, over 100 years of-- 100 years of card game fun. Geez, 100 years. The front says 1904. So this is probably not a science fiction game, all right? Actually, for people who haven't played this game, what do you think this is about? I don't think I've talked about it. AUDIENCE: Maybe about the stock market or something. There's a bull on it. PROFESSOR: There's a bull on it. AUDIENCE: There's a trading pit maybe. PROFESSOR: Yeah, the title of the game suggests things. It's like a commodity trading game. On the side, it says, "Corner the Market." AUDIENCE: And it's got a bell. PROFESSOR: I wonder whether they included the bell and "over 100 years of card game fun" intentionally. This game is from back in the time when stock markets were run with bells. I guess they still are, but they're mostly just run with computers nowadays. I'd love to see Pit done on some sort of updated 21st-century thing. AUDIENCE: [INAUDIBLE] with insider trading. PROFESSOR: So there is, of course, the text at the back that tells you what the theme of this game is -- shout your deal and trade your cards to corner the market, et cetera. And then it also says: family, age seven plus, 30 minutes, three to eight players, just to give you a better idea. And when I look at this thing, I'm immediately forming a mental model of how this game plays. You know that you're supposed to collect cards, because they show you a whole bunch of cards. You know you're supposed to slam a bell at some point in time. And then you're just reading the rules.
All right, so when do I do this? It's pretty easy. Let's see. When it comes to Carcassonne, there is actually a deep, deep problem with this game, despite how popular it is. And that is: scoring is actually pretty difficult to do. It's a math-intensive problem. It does largely map onto how many meeples of your own color you have on large patches of things. But how much those patches of things are worth requires a lot of counting -- a lot of counting. And that's why you need a scoring track. So that is a big problem with how the mapping works in that game. It doesn't really give you a good conceptual model of how much something is worth. But you've got a large patch of something, you've got your dude somewhere in there -- you can at least make the mapping that it's worth something. And then there's the feedback that you get back from games. In Pit, it's the feedback that you get when you hit the bell, right? There's a huge ding sound. There's the feedback that you get when you try to fit a piece into something it doesn't fit into. There are a couple of board games where you have to put pieces together, and that gives you an idea that maybe those two pieces don't go together. What other things can you look for in the realm of feedback when it comes to board games? AUDIENCE: Maybe a track? PROFESSOR: Hm? You mean a board track? AUDIENCE: Yeah, you want to stay on the track. PROFESSOR: OK, all right. Actually, there are games where the only legitimate places you can put your pieces are literally on the pieces that they provided you. So if you fall outside of that, that's not a legal place for you to place your token. AUDIENCE: So it depends on the game. But for a very easy example, in a game like Risk, you can very quickly look at the board and say who has the most soldiers. It doesn't necessarily mean anything, but it's a good indicator.
Just something like that would usually be nice to see-- you just quickly look at it and say, oh, this person's winning for this reason. PROFESSOR: So the sheer quantity of similarly colored things. AUDIENCE: Yeah, it doesn't even have to be [INAUDIBLE]. In Monopoly, if you look at the board, whoever has the most houses. [INAUDIBLE] AUDIENCE: Similarly, in Catan, you can see if somebody has massive roads or a lot of settlements or cities, you can usually tell that they're doing pretty well. PROFESSOR: Mm-hmm, that's, again, direct visual metaphor. And, of course, in that game, you earn points by having the longest road, to emphasize it's a good thing even though it's already a good thing to have in the game. What else? Other players can give you feedback, right? If people remember playing Code 777 last week, there's information that other players can provide you that you need. But other players have to provide it. So what other games can you think of where other players are your primary feedback mechanism on whether you've done a move that's OK or not? AUDIENCE: Mound Builders? PROFESSOR: Mound? AUDIENCE: The only feedback you get. PROFESSOR: It is the only feedback you get, OK. AUDIENCE: Poker? PROFESSOR: Poker? Did-- AUDIENCE: If you make a bet that is-- say you're bluffing and you make a bet that is good, you know when everyone else folds. PROFESSOR: OK, yeah, for a certain kind of bet, it's like, I won it. Everyone folds, and it's like, wait, what? I had a royal flush. Why'd you-- That was a bad idea, right? OK, that gives you some sort of feedback for a very specific kind of thing. AUDIENCE: [INAUDIBLE] PROFESSOR: Oh, yeah. You stick a card with something written on it. AUDIENCE: Yeah, and particularly based on [INAUDIBLE]. PROFESSOR: Mm-hmm, again, it's like Code 777. Everything that you know about the thing that you are trying to guess is something that only other people can see. AUDIENCE: The game Mafia. All the feedback is filtered through the hosts.
PROFESSOR: And both useful and possibly confusing feedback, right? So yeah, all the information you're getting in the game is through other people in Mafia. AUDIENCE: Battleship? PROFESSOR: Battleship? Yeah, there's again the hidden-- information that's hidden from you, but it's completely available to your opponent and vice versa. And you don't need me to tell you the mechanics to tell you that. There is a computer Battleship. It's not as fun. Charades and Pictionary-- usually your teammates are the ones who are guessing. So I wouldn't necessarily call that feedback, although it might be good feedback on whether your clues are getting across to your teammate. But your opponents are also usually some sort of feedback mechanism that's keeping time, that's keeping an eye out for or listening for illegal things, like if you say something verbal in Pictionary. If they hear you say something, they go ah, ah, ah. You can't do that. And they raise their hands or something like that. So just always remember that you can employ other players into your feedback mechanism. It doesn't always have to be your game alone. AUDIENCE: There is a game I played a long time ago called Scotland Yard. The player's position is almost never known. You get some feedback about what he's doing, sort of what he's doing. You have that in terms of the information, the game state that's hidden from you-- feedback on whether or not you have achieved your objectives from the player that knows the information. PROFESSOR: Yeah, I think it's something like four or five detectives chasing one fugitive going through London's mass transit system basically-- buses, cabs, underground, yeah. And the person who's running knows where they're going at any given time. The detectives are working on partial information to try to corner and close this dragnet. So I think that's on the list. Do you remember it? That was on the syllabus. It used to be-- Scotland Yard. AUDIENCE: I'll double check it, I think I saw it.
But we may have changed the game-- PROFESSOR: Yeah, so we might get a chance to play that. But that also falls in the mastermind category of games, where it's like, here's this person with all the information-- only that person is changing the information as the game goes on. That's a big difference in Scotland Yard. AUDIENCE: We did remove it this year-- but we do have a copy. PROFESSOR: OK, maybe we'll bring it in. It is a good game. So one of the things that Donald Norman ends his very first chapter on is-- why is it so hard to-- he asks this question. OK, we've gone through this huge list of things in his book, including things like light switches-- well, actually, he hasn't talked about light switches yet. He will talk about light switches, something like that. He talks about cars. He talks about doors. He talks about clocks. And he asks this question-- why is it so hard to actually make something right? Anyone remember? Or anyone thinking of-- AUDIENCE: Doesn't the designer never really communicate one-to-one with the user? They're communicating through the object that's either being designed or being used? PROFESSOR: That is definitely true. It is a second order problem. You're designing something that then becomes this manifestation that somebody else uses. And that's actually when all the problems occur. You first, then. AUDIENCE: Well, he was talking about how, like, at first when something is invented, it's very complicated, and then through iteration you come up with something very simple. But then people want more functions. And they start this whole process over again where you keep adding new things. But it becomes less and less intuitive. PROFESSOR: Market forces push you to add things, to do additive design in order to distinguish yourself from the competition. And that naturally leads to complexity in the interface. AUDIENCE: And a lot of products, you said, don't get through that process because it'll take five or six times to get it right.
But if it's not good by the second time, people just won't buy it. PROFESSOR: Yep, so it gets back to iteration. Five or six times means five or six tries at the same problem, right? So iteration is the reason why design starts off as being very clunky. But it can eventually become something that works well, communicates well, or something that people can learn and maybe even enjoy, in the case of games. One thing that's funny about the book is that it talks a little bit about the clock radio that could do-- make phone calls, and be used as a desk lamp, and keep track of your appointments, and be used as a TV. And I'm still thinking, isn't this that, right? Exactly the same thing that he's describing. And it's interesting because he described a phone that can only be used with two buttons, right? Here's a button. And then here's a huge button. You get to select things. And it's exactly what he's talking about, only I doubt he actually imagined that this was something that would be possible at the time when he wrote it. That was 1988. It wasn't that long ago. And cell phones obviously have gone through a lot of criticism. The iPhone, of course, gets a lot of criticism, because people criticize it for being a locked-down system where you can't really do all that much. It's certainly not as customizable as an Android system. It's expensive for what it does. But then, arguably, by locking out a lot of things that you might want to do but maybe don't have to do, they are trying to make it easier for you not to do the wrong thing. So depending on-- there are different ways to fall on this. But it has been successful for a number of different reasons. Don't discount marketing as being something that does sell. But what I want you to think about now is actually the process of prototyping. Actually, you know what, I'm not going to go right into prototyping, because it will probably make more sense once I've actually got the prototyping materials out.
What I am going to do is give everyone a five minute break. We're actually going to come back and play some of last year's games, because I think there's enough time for it now-- about an hour. And then the last hour of class, what I'm going to do is go into brainstorming. So then you can start forming your teams, thinking about what kind of game you want. Prototyping might be something we'll leave for Wednesday, so that there'll be a little more time for you to work in your teams. So I'd like to get all the Carcassonne bits back, and then we'll pick this up in about five minutes.
MIT CMS.608 Game Design, Spring 2014 -- 21: Social Play
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, let's talk a little bit about today's reading. [INAUDIBLE], this giant book that you see in front of you and that you have the link to-- how many of you, by the way, read the actual book and how many of you just went online? How many got like a library copy? [INAUDIBLE] It is basically divided into three sections, there's rules, play and culture. Basically it's a collection of just what other people have said, but tried to organize in a way that makes for a pretty good textbook. The first chunk is all about analyzing games, of systems, of rules, the bits that actually go into the construction of games. The play section, which is what today's reading was from, is really about what people do with these games. And then culture comes after that, which is the world around the games, the world that might have been created by the games as people actually interact with them. I don't think we have that many readings, this might be the last reading from this book. Yeah, let's see, culture. We've got-- oh, there is? OK. There is one more reading, but I don't know if it comes from this book. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. Bernie De Koven is the last reading, and that's going to be Wednesday, I think? And that's going to be-- that's one of actually the writers that's cited quite a lot in today's reading. And it's a much smaller book. It's this thick and it's easy reading. But it's a good jumping-off point for this class because this is literally where the book ends on thinking of games as designed objects. 
And things that could stand alone as a product-- there's a set of rules that can be passed on from generation to generation, this solid thing-- into what people make out of all of this. And if you're interested in that, then there's another class-- I think it's CMS 616 Games and Culture. And it should be offered-- I can't remember if it's a fall class or a spring class. Spring class? OK, so it's offered this semester. If you're not a senior and you're going to be around for another year, you might want to consider that if you're interested. To sort of see where it is going. And obviously there are a lot of researchers here at MIT that are starting to get into the realm of sociology. There are folks here who study the history of games, folks who study what people currently do with games. For instance, how many of you have met or taken a class from T.L. Taylor? A few people? Yeah, OK. So she comes from a sociological background. A lot of her work is ethnographic, which just means it's like a researcher going into a native culture and seeing how people behave there, and what they value, and what the common practices are. And she does it with massively multiplayer online games. She did e-sports-- that was the most recent book-- and now she's looking at live-streamers. And she has a lot of research, a lot of information. So that's a good person to learn from. Some of you also have come from Todd Harper's class. He's a postdoc here in our lab. He's taught quite a number of classes, both CMS 300 and CMS 100, as well as his own game design for expression class, where he looks a lot at competitive styles of play-- I'm sorry, competitive communities of play. So the fighting game community was his dissertation. He's looking at League of Legends right now, I think-- pretty much what he is looking at now. As well as some other aspects, like queer games and the people who play and make them. So that's another good angle to look at. So back to today's reading.
There's a couple of really, really useful ideas in it that can be used for a variety of ways to look and analyze your game. Especially when you're trying to observe people playing your game and trying to figure out what's going on here. And the games that we've got today, including some of the games that John has are all the sorts of games where what's interesting about the game isn't so much what's written in the box, what's written in the rules, but what happens between people. And I would argue for many, many games, even the ones that are traditionally recognized for the magic of what's inside the box, what's actually interesting is what happens around the game. Last week we talked about Go, for instance. And it's like, OK, what's actually written about the rules of Go is not that elaborate, right? Really, really simple, and mathematically is really elegant. Entrancing in the sort of way that mathematicians like an elegant formula. But what's also interesting is centuries and centuries of strategies and culture that have built up around this particular way of playing. So there's a couple of useful tips that help us think about what happens in a game socially. That's internal-- internal interaction and external interaction. So let's talk about internal interaction. What do you think Zimmerman's talking about when he says an internal social interaction in a game? AUDIENCE: So I think he's talking about how you feel about your role. So different people have different jobs in the game. So I'm supposed to be doing something different than you're supposed to be doing, even though we both might have the same goal, might not even have the same goal, but the same ideas to win or whatever. PROFESSOR: And the reason why you have those differences is set up by basically how the game's rules are set up, right? AUDIENCE: And so the internal social [INAUDIBLE] might be my idea about that. My idea of what my role in the game is. 
PROFESSOR: OK, so I think I'm mafia, right, in the game of Mafia. I'm the mafia, or one of the members of the mafia, so I think I'm going to play this way. The game rules tell me about when I can communicate and when I can't. Basically everyone can communicate at the same time, then everyone shows [INAUDIBLE] And then you come up with the strategy of how you're going to play it. It evolves from the roles that were given to you by the rules of the game. How about external interactions when it comes to social play? What's the flip side of that? AUDIENCE: Friendship outside of the game [INAUDIBLE]. Mafia's [INAUDIBLE] Say your friend's not one of the Mafia. You'd be like, oh, he's my friend. I'll save him for last. PROFESSOR: Yeah, I'll torment that guy for last because he's my friend. I'm not going to eliminate him right away. AUDIENCE: Another interesting one was, I think, also past experience with Mafia. I know that a person's really, really good at Mafia, so I'm going to kill them on the first round just because I don't want to deal with it. PROFESSOR: Right, I just want to get them out of this game. Sure, yeah. AUDIENCE: I think that happens with a lot of games. Just like you team up on [INAUDIBLE]. PROFESSOR: Couples playing a game that's supposed to be one where everyone's supposed to be playing an individual game. Yeah, knowing that someone is a pathological liar. So that actually also comes up in design, by the way. I used to write games for the MIT Assassins' Guild. And there were four people who were playing games at the same time when I was writing games, and we always used these four people as ways that games are often broken, mostly through weird social interactions. Someone who's going to be the rules lawyer, for instance. And someone who's going to stop the game in order to make sure that his interpretation of the rules is what everybody else is going to play with.
Not necessarily to make sure that the rules are adhered to, just to make sure that his interpretation is going to work. Somebody who's going to stop a game because they're always going to keep asking for reminders on how the rules work. Just like, uh, I can't handle this right now, all this is supposed to be happening simultaneously, too much math, stop. And I'm going to continue that, so this is the second kind of person that I had to design for. The third kind of person-- the person who is so capable of just summoning the collective brainpower of everybody to be able to churn through any kind of puzzle that I can create, because this person was such a convincing speaker that basically he can use your head like a node on a parallel computer. He would just say, can you do this thing for me, it won't take a minute, and somehow you will do it. This person had that social skill. I'm trying to remember what the fourth one was. Oh yeah, there was one person who could do the same thing entirely on his own. If you put a puzzle in the game, the puzzle will be solved by this one person sooner or later, if it wasn't already solved by someone else. So here's this giant macroeconomic system that I've set up, or something like that. And that one person will figure out the optimal way to solve everybody's problem simultaneously. I could design for that. So I could design to make sure that each of these people don't break the game, but still can do the thing that they want. Some of the folks that I'm describing actually enjoyed that, some of them didn't enjoy that. That's just how the brain works, but if the game falls to that, then they're not having any more fun than anyone else. So I knew something about who was going to be playing my games by knowing these four people. I could cover a very wide range of different ways that my games could just fall flat on their faces.
These are real people, but anybody who plays my games could exhibit that same sort of behavior at an opportune time. I have to make sure that my rules could hold up to that. So when you design your games, how many of you have somebody in mind? Someone in particular who's going to end up playing the game. Friend, family member, person in class. AUDIENCE: For me, I think when I worked on the game [INAUDIBLE] Hurricane, I think that it definitely reminded me of specific friends that I played Twister with in the past-- a game that I felt was a game to get inspiration from. But sometimes I don't have [INAUDIBLE]. AUDIENCE: The easiest one for me is just developing for myself, and people who enjoy the same types of games that I do. If I find a game fun, then people who enjoy the same types of games that I do will likely find it fun. If I don't enjoy the game, I shouldn't be making it. PROFESSOR: Yeah, you are a sample point, you are a valid sample point. It just happens to be one that you know very well-- yourself. There is a mantra out there that's often mentioned by people in the indie game development community, which is: you make a game that you want to play. And I'm not entirely sure that I agree with that. But I can see where it is coming from. Because if you can't satisfy yourself, then the likelihood that you're going to satisfy any other audience member out there who would actually play your game is lower. But I think there is something about having to design for people who are not you, which is also valid, but obviously it's harder. AUDIENCE: I've heard it described as a Venn diagram. One is games I want to play. The other one is games other people want to play. A third is games I can make. Ideally, you want to be right in the center. PROFESSOR: Yeah. A vast set of ideas that you have in your head, only a small group of which are things that you actually want to do. And then a smaller group of that are things that other people want to do.
A smaller group of that is something you can [INAUDIBLE]. That makes sense. The feasibility thing is what we've been addressing this entire class. How do you actually make a game that you can make, as opposed to a game you wish you could make? So if we've got lots of different kinds of people who you're making your games for-- people who you played similar games with in the past, for instance. People who you know are the kind of people who tend to-- who won't break your game. When you're making the Twister game, you have to make sure that you can play the game without breaking it, because you are physically larger than a lot of folks and able to reach much further. And make sure there are rules-- that you can design your rules for that sort of thing. But then there's also broad categories of people. And that's why we get the reference to Richard Bartle in today's reading. I did not include the whole article, but the whole article is online in plain HTML. If you just do a search for Bartle-- I forget the name of the article, but if you just do a search for diamonds, hearts, clubs, spades, Bartle, you'll find it. I think it's "Players Who Suit MUDs" or something like that. A lot of that work was created when he was basically the originator of the very first multi-user dungeon. Anyone heard of a multi-user dungeon? How many have played one of those? I've played way too many. You see people playing when you're reading the text that is typed on the screen. AUDIENCE: [INAUDIBLE] PROFESSOR: OK, so how many of you have seen or played a text adventure? So text adventure-- basically everything is text. You've got verbal descriptions instead of graphics of the rooms that you are in, and the objects you can pick up, and what you can do. You've got an inventory that's explained to you in text. You've got text commands you can enter. Usually when you're entering commands it's a verb and a noun. Get key, open mailbox, get [INAUDIBLE], that sort of thing.
Some are just single words. Inventory, north. Actually the full north command is actually go north, but there are shortcuts where you can just type N. North, south, east, west. A multi-user dungeon, traditionally, was a multi-player text adventure. Everyone would just occupy the same space on a server. They would use something like Telnet, or maybe a dial-up, on [INAUDIBLE]. And call into a server that will be serving them ASCII text. And then simultaneously going to the other people. It's very, very important that in multi-user dungeons, multiple people are connected simultaneously. It's not like an asynchronous thing where one person connects at a time, even though they're all logging in to the same world. So not all that different from what you expect from an MMORPG nowadays, the massively multi-player online role-playing games. Only obviously no graphics, and a much slower pace of game, because everything has to be typed. You don't just [? walk ?] [? off, ?] you have this inventory. All my examples are really fast examples. Give envelope to Bob. It's like you could actually give that kind of thing and then you just transfer an object to somebody else named Bob in your space. And he described four different categories of people that he often saw play his games that I think still hold up pretty darn well. So who remembers any of the four categories? AUDIENCE: The kind of people who really get deep into the underlying [INAUDIBLE], who want to learn about how things work. PROFESSOR: The diggers, right? The miners. The people who-- Minecraft is a great example of a game that is designed just for those people. People who want to explore how the system works, how the environment works, who want to discover things and to see what's behind that next mountain. AUDIENCE: I'll go with [INAUDIBLE]. PROFESSOR: Killers, or clubs. And what was the justification for what they do? What do they do? AUDIENCE: [INAUDIBLE] PROFESSOR: The people who are there for the other people, primarily.
In a way, the killers and the socializers are there because other people are there-- for different reasons. The diamonds want to win, by any definition of winning. Now, MUDs traditionally did not have a definition of winning. You couldn't win that game. There weren't any final bosses you could kill, so you had to find your own metrics for winning. So it could be there are PvP-type diamonds. You want to rack up the highest tournament score or something, but you're not there explicitly to cause people grief. You're not trying to hit them when they're not looking. Sort of like, we're going to organize a duel kind of thing. We're going to see who can play this game the best, as opposed to making people upset. There's a way to remember this that I think has really been useful. I find the playing card suits not so easy to remember, but I could actually remember the categories pretty easily by whether you are acting on something or you're acting with something. And the question is whether you are acting or interacting. I just put two different [INAUDIBLE] that meant [INAUDIBLE] It's also supposed to be this. Interacting with [INAUDIBLE]. And then at the world, all the systems, and then other players. So if you're interacting with other players, we've got the socializers. And people who are acting on other players are the killers. Acting on the world. [INAUDIBLE] AUDIENCE: [INAUDIBLE] PROFESSOR: [? Assignments, ?] which leads this to be the [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] PROFESSOR: These are the glitch finders. These are the people who figure out how to get outside the boundaries, your invisible walls. Who want to see whether there's a way they can get below the very lowest layer in Minecraft. And what happens when you cram so many explosives in one area that you can't perform this experiment without killing yourself. They're the ones who are going to do it. Because they want to see what happens.
Whereas these folks are really a little bit more about, how do I use this world to be able to obtain some sort of other goal? Which is usually points, some sort of achievement, some sort of recognition from the other players. They don't usually get the recognition, but they have a number to show that they're deserving of it. And I find this to be a really useful way to think about what games can do to be [INAUDIBLE] to different people. Because the truth is that even though when you play a single game, you may be primarily one of these people, you may not play every game this way. Say I'm a [INAUDIBLE] someone who's trying to chase all the points. Someone who is trying to get to the top of the leaderboard in [INAUDIBLE] or something. But that may not be the way that I'm going to play World of Warcraft. That may not be the way that I play contract bridge. I might be playing more socializer when it comes to contract bridge. And different people are often different things in different games. And sometimes different things in the same game. Right now, I am trying-- everyone in this room probably knows by now, I play too much StarCraft, right? Usually, I am trying to win. I am trying to raise my rank. They have a ladder system. Sometimes, I just want to zone out and talk to people. I just want to type-- I just want to go into free-text chat and be friendly for a while. I'll do something very dumb that takes no actions whatsoever. And I'm just like, at least my fingers are going to be able to type my [INAUDIBLE]. And that's what [INAUDIBLE]. Different times of the day, different moods that I'm in, I'll do something different. And if you can make a game that would be able to cater to people in different moods, then that's great. You've got a game that can be played in different situations, or more importantly, among a group of people who are all feeling kind of different-- [INAUDIBLE] type. This is not the only breakdown that I've seen.
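The two-axis breakdown described above-- acting on versus interacting with, crossed with other players versus the game world-- can be sketched as a small lookup table. This is just an illustrative sketch of Bartle's grid; the dictionary and function names are my own, not anything from Bartle's article:

```python
# Bartle's four player types, indexed by his two axes:
# how you engage (acting on vs. interacting with) and
# what you engage with (other players vs. the game world).
BARTLE_TYPES = {
    ("acting", "players"): "killers (clubs)",
    ("interacting", "players"): "socializers (hearts)",
    ("acting", "world"): "achievers (diamonds)",
    ("interacting", "world"): "explorers (spades)",
}

def classify(mode, target):
    """Return the Bartle type for a style of play.

    mode is "acting" or "interacting"; target is "players" or "world".
    """
    return BARTLE_TYPES[(mode, target)]
```

So, for instance, classify("interacting", "world") gives back "explorers (spades)"-- the diggers and glitch finders from the discussion above.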
There is a system that's been pitched by, I believe, a developer at Ubisoft Montreal-- although don't take my word for that-- Jason Vandenberghe. And he suggests, actually, a system based on the OCEAN model of psychology. Has anyone heard of this? It's five big drivers of motivation that psychologists have, basically, created as a framework. And one of them is neuroticism. There are people who are motivated because they are neurotic. And I can totally identify with that. It's like they want to be-- it gives them comfort to be performing neurotic things-- things like hundred-percenting a level, or a game, or something like that. It's a motivator. It's a driver. It's no particular achievement to hundred-percent a level in Metroid, because a lot of people do it. But the fact that that number is there motivates a certain group of people to be able to bring that number from 99% to 100%. So I want to make clear that this is just a theory. This is just-- Richard Bartle-- creator of the very first MUD, in Essex, I believe-- United Kingdom-- and he just wrote an article on a website-- he would have put it up on a blog and blogged it at the time-- about, these are the people that I happen to see play my games. And I can generalize these people into these four categories. Let's see. Other stuff that's covered in today's reading includes the idea of people bringing things from outside of the game into the game. And we talked a little bit about metagames. And this article goes into a little bit more detail about that. So there's-- that's, again, four things that the book goes into when it comes to metagaming-- and these were posited by Richard Garfield, designer of Magic: The Gathering. But the way he was describing it is that these are all kinds of metagaming experiences. And I'm trying to make a game that capitalizes on people playing the game when they're not even really playing the game.
When they're not sitting in front with their decks of Magic, they're still playing the game, because they can do something else. And so, anyone remember what those four things are? AUDIENCE: Things that you bring to the game-- PROFESSOR: Yep. AUDIENCE: Things that you take from the game-- PROFESSOR: Right. AUDIENCE: Things that you do between-- PROFESSOR: Mm-hmm-- between games-- yep. AUDIENCE: --people-- between games. And the last one is-- PROFESSOR: The last one is, actually, probably the more traditional understanding of metagaming. So if-- even if you haven't read, you might have an-- or, you might be able to hazard a guess. You got through-- AUDIENCE: [LAUGHS] PROFESSOR: --spot on. AUDIENCE: [INAUDIBLE]. PROFESSOR: That's right-- things that you do during the game that are not, actually, part of the game rules. So what are some examples of things that players bring to the game-- from outside of the game they bring into the game? AUDIENCE: So there's physical objects, but also knowledge. PROFESSOR: Right. AUDIENCE: So for a baseball game, it would be like your bat. PROFESSOR: Right-- your ball-- things that you might require to even start the game in the first place. AUDIENCE: [INAUDIBLE] it might be cards, I guess. PROFESSOR: Mm-hmm. AUDIENCE: But you can also bring knowledge. So if chess is a game that you're not familiar with, that you study up-- they call it booking up-- on openings, different tactics, things like that that you can study, bring into the game-- basically, like preparation. PROFESSOR: Yeah, yeah-- so if you are going into a competitive game, you might come in with a strategy that you intend to execute. That happens in sports a lot. We train these particular maneuvers-- these particular plays. AUDIENCE: Even, like, metagaming [INAUDIBLE] that. I guess that might be [INAUDIBLE]. Magnus Carlsen-- the world's best chess player-- he played openings that are not that good, just to screw with people.
PROFESSOR: When you're playing the players? AUDIENCE: Preparation, yeah. PROFESSOR: Yeah, and if you're playing the player-- I think that still fits within the category of things you bring to the game, because that means that you are bringing to the game an understanding, or a guess, about what your opponent is like, and how your opponent may have played before. Someone's reputation is, actually, [INAUDIBLE] in today's reading as something that you bring to your game. It's not something that you voluntarily bring to the game. It's just something that precedes you. So then, what stuff do you take from the game? AUDIENCE: I mean, so I guess the typical items might be enjoyment from the game, or whatever. And Avery talked about [INAUDIBLE] Magic, where you win a card if you win the game. PROFESSOR: Yeah, yeah, there can be prizes. There can be cash prizes. There can be stakes. AUDIENCE: If you're gambling-- [INTERPOSING VOICES] AUDIENCE: --incentive [INAUDIBLE]. PROFESSOR: Yep, again, reputation. AUDIENCE: You can also take knowledge from the game. PROFESSOR: Sure, especially a game that you lost, right? AUDIENCE: Yeah, exactly. PROFESSOR: Yeah. AUDIENCE: I think that's what [INAUDIBLE]. PROFESSOR: Yeah, you-- hopefully, especially in the kind of game that I wish I could design one day, you take away something that deepens your understanding of the system that you just played. And that's a wonderfully intrinsic motivation-- something that arises from the rules. But it's not going to affect the game that you just played. It's going to, maybe, affect your next game. AUDIENCE: This is super cheesy, but you could take away new friendships. PROFESSOR: Oh yeah, well, social games-- there are people who end up in this area during-- AUDIENCE: [INAUDIBLE] enemies. [LAUGHTER] PROFESSOR: Or new enemies, yeah. You can-- AUDIENCE: [INAUDIBLE] diplomacy games. PROFESSOR: Yeah, well-- yeah, Diplomacy is great at creating bad relationships. So yeah, OK.
So the stuff that you take away from the game makes a lot of sense. The interesting thing about the ante situation is that it almost moves into a third category, which is the stuff that you do between games. Specifically, you're taking out of the game something-- AUDIENCE: [INAUDIBLE] you said that Richard Garfield made that game. Was he the first person to do that? PROFESSOR: I'm not sure about that. But he made Magic: The Gathering. He's the designer of it. AUDIENCE: OK. You said that he was [INAUDIBLE]. PROFESSOR: No, no, no, no, no, no. AUDIENCE: [INAUDIBLE] as soon as the game was created. PROFESSOR: Yeah, yeah, yeah. No, no. AUDIENCE: [LAUGHS] PROFESSOR: Well, I mean, he wrote it into the original rules. And nobody liked that part-- or, I think very few people seemed to like that part of the rules. AUDIENCE: Yeah, every [INAUDIBLE] been banned for as long as the game's been around. PROFESSOR: Well, OK. But it is in the original rules. But that's a mechanic that's imposed on top of the game. I guess it was part of the game, originally. But now, if it's played for ante, it's being played outside of the game, because it's not part of the rules anymore. Whereas, if both of you are putting a card up-- say a really, really skilled player is going to play a really, really unskilled player. And the unskilled player says, well, chances are, I'm going to lose. But the skilled player is anteing up an incredibly valuable card that I might not, otherwise, be able to get. So I'm going to ante up this crap card that I will always be able to replace if I lose it. But it makes it worth playing that round, and then the game [INAUDIBLE]. AUDIENCE: It's an interesting [INAUDIBLE]. So playing [INAUDIBLE] a game that would have some luck in it, where it actually has a very high [? appearance ?] of luck involved.
And so, it's really, really easy to play this game and say that you got really, really unlucky. And I have definitely [INAUDIBLE] after playing this game. It was [INAUDIBLE], but how unlucky they were during that game. PROFESSOR: Well, OK. I mean, that's something-- yeah, sorry, sorry. Go on. AUDIENCE: Oh yeah, it's just that oftentimes [INAUDIBLE] people have a tendency to be lucky or unlucky in this game. And someone might say, I've lost-- [INAUDIBLE] oh, I've lost a lot of [INAUDIBLE] games in a row. But I'm just unlucky in all of them. Something like that. PROFESSOR: Well, something that happens between games-- you brought up a couple of things. One is the discussion about how this game seems to be operating. And on Reddit, that tends to be a very angry conversation among friends who have played games of Twilight Struggle, or something. It's a combination of storytelling-- man, we're just recollecting how that game went, and how unlucky I was. [INAUDIBLE] all the time. Man, that dice just does not want to roll sixes. And I think that's more something that you take away from the game. But if you're going to go back into the game and then understand, OK, I thought this was a game all about logic-- now it's really just luck-based-- and then you're going to play the same game again, you're bringing different expectations. So you're adjusting your expectations. So I will put that in the category of something you do between games-- adjusting your expectations. Then what happens during a game, other than the game itself-- stuff that's not in the rules? Yeah? AUDIENCE: There's taunting, or-- PROFESSOR: Trash talking? AUDIENCE: Yeah. PROFESSOR: Yeah, trying to mess with your opponent's psyche while the game is going on.
AUDIENCE: Also just more metagame stuff-- thinking strategies over in your head, and playing out different scenarios of what might happen. PROFESSOR: Right. Does this person even know about this strategy? I am seeing something that looks like the player might. But really-- are you really going to do that? It happens in card games all the time. What else happens during the game other than the game itself? There are times when gameplay just stops. And I, actually, gave a couple of examples of that. What are the out-of-game considerations that force the game to stop, or things you have to do to keep it going? Remember any of these things? AUDIENCE: Shuffling a deck of cards? PROFESSOR: Those are usually mentioned in the rules. You reach the end of the deck, and then you have to reshuffle your deck. I meant physical, safety considerations, for instance. Somebody fell down in the middle of a game of tag, or something. And then it's, like, nothing in the rules says that you have to stop the game. But people do, because you want to make sure the person's not hurt. It happens during the game. And metagaming is sort of concerned with that-- but it's not, necessarily, in the rules. And the reason why it's important to the game is because you want people to say, hey, I want to play tag again. I want to play tag. But I want to know that I'm safe while I play tag. So people understand, on the metagaming level, that if somebody looks like they might be hurt, the game will not continue until we verify this person is not hurt or get this person adequate medical attention, for instance. But that doesn't necessarily always have to be a physical hurt. There's a great account in the reading about playground groups fighting over what nice rules meant, and the words that they were using to describe what nice rules meant. At first, they were just starting with, well, we're going to be playing nice. They call that [INAUDIBLE] rules.
And then, people who didn't want to play nice started joining the group to play Four Square. This is not the mobile game, Foursquare. This is the bouncing ball version of Four Square. And then every time they did something outside of the rules, the people who were trying to play nice had to get more, and more, and more specific about what nice meant. And that wasn't actually the intent. They didn't actually want to get all that specific. They just wanted people to play nice. Because once you get more and more specific-- because there are so many rules-- you get into rules-lawyering mode, when the whole point was just supposed to be playing nice. So that's something that's happening during the game. The game's actually still being played. Part of the game of Four Square is just bouncing this ball around. But all this rules negotiation is happening to try to steer the game into the direction that-- in this particular case-- different groups of kids prefer to play the game. One group wants to be very competitive and winner-takes-all. The other side was like, the whole point of playing this game is so that we're not annoying other people-- that divide, I guess. So Magic: The Gathering was, ostensibly-- this could be something that he came up with after he had designed the card game, in order to justify why he decided [INAUDIBLE] the card game. Often when you ask a designer or a writer, why did you do this, it's very easy to come up with a justification that sounds good. But it isn't necessarily what they were thinking at the time when they came up with the game. So we have no way of knowing. Even Richard Garfield will not be able to tell you honestly whether that was the case or not, because no one really remembers very clearly what they were thinking when they were designing the game. But it's a good justification, I think, which is: what does a player bring to a game of Magic?
Well, they bring their deck of cards and the knowledge about how that deck is supposed to work, because they built that deck themselves. And it's supposed to be this little engine to execute a very specific strategy, which is something that they want to try for the next game. They may take the ante away from the game-- but also all of that knowledge: OK, now I know that my deck is actually a pile of crap, for instance. What happens between games? You assemble new decks. You chat on forums. Maybe new cards get released, and then you have to try to figure out how your deck is going to work against them. And then there's what happens during the game itself. I don't know, actually, how much taunting happens at Magic tournaments. Anyone play Magic competitively? Is there taunting? AUDIENCE: There's never really anyone taunting, like, you suck. I'm going to destroy you. PROFESSOR: All right-- [INTERPOSING VOICES] PROFESSOR: OK, the Magic people are going, you sure you want to play that? You know, that sort of thing. AUDIENCE: I got goaded into that once-- yeah, actually-- being competitive [INAUDIBLE]. PROFESSOR: [LAUGHS] AUDIENCE: There's also a huge backlash right now against people just being animals during games, and just-- PROFESSOR: Well, yeah. AUDIENCE: The tournament organizers are, specifically, supposed to look out for that and try to discourage it, because [INAUDIBLE] bring them back to play. PROFESSOR: Right, because they're trying to build up a social community. AUDIENCE: I've already-- [INAUDIBLE] admit that you have something that's-- that you have a [INAUDIBLE] you can finally use. Or if you've gotten to a position where you're about to use an ability that will almost win you the game. And sometimes your opponent will realize that he's unable to stop it and [INAUDIBLE]. And so, I read a comic which was about someone like, oh, my God. I'm going to get to use my play blockers all for the first time.
[INAUDIBLE] And he's, like, I'll pay you money if you just want to keep playing [INAUDIBLE]. [LAUGHTER] PROFESSOR: Wow! AUDIENCE: This didn't actually happen. PROFESSOR: But that would be hilarious. OK. All right. AUDIENCE: [INAUDIBLE] to an entirely different level of gameplay. There's a famous story about one of the best players in the game-- he forgot to bring the one card in his deck he could win with. So he completely controlled the game and then couldn't do anything [INAUDIBLE]. PROFESSOR: OK. AUDIENCE: He won the whole tournament. PROFESSOR: He won the whole-- AUDIENCE: He got every single person to concede before he had to play [INAUDIBLE]. PROFESSOR: Yeah, sure, sure. AUDIENCE: [INAUDIBLE]. Why wouldn't he? PROFESSOR: Yeah. [LAUGHTER] PROFESSOR: Well-- AUDIENCE: And it brings another level to the metagame, where everyone knew the deck. So everyone knew exactly what he was trying to do. And by the time he was just going through the motions, they were, like, OK, sure. I know what happens. PROFESSOR: You know this person has this card. You just don't know that the person didn't have the card at the moment. Right? [LAUGHTER] PROFESSOR: OK. AUDIENCE: Just building off of that, in the [INAUDIBLE] very limited StarCraft higher-up games that I've [INAUDIBLE], there would often be times where neither base was destroyed. But one person would, automatically, concede defeat. PROFESSOR: That's the majority of StarCraft games, actually. AUDIENCE: I never understood it. But they're not still going. [INAUDIBLE] Their economy was completely destroyed. PROFESSOR: Yeah. AUDIENCE: Well, in that game, you can lose [INAUDIBLE], but not, actually, lose [INAUDIBLE]. AUDIENCE: You can stall the game for a very long time by just running around building pylons across the map. [LAUGHTER] PROFESSOR: It's a game full of positive feedback loops. So it's one of those things where, once you feel that things are snowballing, if you are right that things are actually snowballing out of your control, then yes.
There's no way to actually win. But yes, you can draw it out. But theoretically, there really shouldn't be a way to win, because it's a game of positive feedback loops. AUDIENCE: Some people quit prematurely. PROFESSOR: Yes. AUDIENCE: IdrA was famous for rage quitting and commentators [INAUDIBLE]. PROFESSOR: Did he just rage quit? AUDIENCE: It was pretty even. PROFESSOR: Yeah. AUDIENCE: He could still come back. PROFESSOR: And that's the thing. If you are wrong, and you are not, actually, in a bad situation-- if you are actually pretty even-- then, because it's a game of limited information-- imperfect information-- you don't know exactly how badly off you are. AUDIENCE: On the [INAUDIBLE] the person not having their [INAUDIBLE]. So that's-- sometimes you-- [INAUDIBLE] you make them play it out. I want to make sure that-- [INAUDIBLE] if the person didn't know how to win. They had it. But they just didn't know how to use it properly to win. And so [INAUDIBLE], except for one person who's like, I'll make them play that. But then they couldn't do it. But one of my friends was playing [INAUDIBLE]. And he was in a situation where he could make arbitrarily many copies of his card. And every time he made-- his copies would last for one turn. He'd [INAUDIBLE] And so, he ended up-- [INAUDIBLE] made him go through [INAUDIBLE] and said, [INAUDIBLE] and click, and actually make 50 copies of his card. He just spent almost 10 minutes going through and making copies of this card. And if he had just clicked-- if he had made a specific [INAUDIBLE] that was easy to make, he would just lose the game. So-- PROFESSOR: So, yeah-- execution error-- I mean, that happens even in real-time games like StarCraft. Yes, you should not be able to win this game unless your opponent makes a huge mistake. And that happens, by the way.
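The positive-feedback-loop point — income scales with what you already have, so a small early lead compounds and the trailing player can stall but not catch up — can be sketched with a toy simulation. The 5% starting lead and 10% growth rate here are arbitrary numbers chosen purely for illustration:

```python
# Toy model of a positive feedback loop: both players' resources grow at the
# same rate, proportional to what they already have, so the absolute gap
# between them widens every turn. Starting values and rate are arbitrary.

def snowball(lead=1.05, trail=1.00, growth=0.10, turns=20):
    """Return the leader-vs-trailer resource gap after each turn."""
    gaps = []
    for _ in range(turns):
        lead += lead * growth     # income proportional to current resources
        trail += trail * growth
        gaps.append(lead - trail)
    return gaps

gaps = snowball()
# The gap widens monotonically even though both sides grow at the same rate.
print(all(later > earlier for earlier, later in zip(gaps, gaps[1:])))  # True
```

Once a player believes this compounding has started working against them, conceding is rational — the StarCraft discussion above is really a disagreement about whether the loop has actually kicked in, which imperfect information makes hard to know.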
AUDIENCE: There's another interesting thing that happens with hidden information in League of Legends, where, as a spectator, you can see how much gold both teams have in total. But you can't see, even, how much gold your own team has in total when you're actually playing. You can only see the number of kills, and maybe the number of towers, and relative farm counts, basically. But you can't actually tell, oh, we're actually up in gold. So some interesting things will happen where a team can be down eight kills or 10 kills or something, but still be even or better in gold just from having gathered different objectives and more [INAUDIBLE] and stuff. PROFESSOR: And bringing this to a point of design-- whether you allow players to concede a game in progress has, actually, been an active design decision in DotA and in MOBA games like League of Legends and DotA 2. DotA 2, in particular, for a long time-- and I'm not sure if that's still the case-- actually did not allow you to concede a game in progress. Does anyone play DotA 2? Can anyone confirm this? AUDIENCE: Once. I think it still doesn't allow you. PROFESSOR: Still doesn't allow you? AUDIENCE: I haven't played in awhile. PROFESSOR: OK, it's definitely one of those things that struck me as, what? And it's because of these reasons. Just because you think that you've lost the game doesn't necessarily mean that you've lost the game. And the designers decided that it's more important that you fight it out to the bitter end than concede. But of course, just because you can't quit the game, doesn't mean that you can't quit the application-- [LAUGHTER] PROFESSOR: --or [INAUDIBLE]. Huh? Yeah? What? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, [INAUDIBLE]. Or you could just choose not to do anything-- deliberately walk into your opponent [INAUDIBLE]. Let's see-- your hand? AUDIENCE: One of the reasons that I don't play DotA is because they don't allow you to quit.
It's irritating to me when the game is clearly over for me. PROFESSOR: Yeah, I mean, it's a good 40-plus minutes per game. AUDIENCE: Yeah, it can take a long time to finish out a game. PROFESSOR: Yeah. AUDIENCE: And I want to go do something else, or I just want to take a mental break. And they won't let me. AUDIENCE: I watched a presentation on [INAUDIBLE] last week in another class I'm taking. And I believe a lot of the justification for that was also that it irritated people who were allegedly more serious gamers, because there are all these people just casually playing the game who will just give up. They ruin it for the rest of us who want to keep playing. PROFESSOR: Yes. They are the ones who feel that if you are slightly behind you can still fight back. Whereas there are people who just play this game to chill out. They're just like, oh, I don't feel like fighting back right now. Let's start a new game. And if they had the ability to concede they would do that. AUDIENCE: I was just thinking, don't the most serious of gamers join with a team and they're on the-- GUEST SPEAKER: Yeah, absolutely. And even when you're playing by yourself in these games, high-level players don't concede, because they know they can come back. PROFESSOR: But it's a spectrum, right? You don't start off by being serious. You start off by being sort of, hey, what's this game all about, and end up in this middle tier-- you're still playing with strangers, but you are getting really interested in doing well. And right now Todd Harper, who I was describing, is actually thinking pretty hard about what the value is of having taunt as a game mechanic. Fighting games have that action which deliberately wastes time and opens you up to attack, right? But the whole point is that you will do that in order to-- it's the digital equivalent of trash talking. I'm going to make this move, deliberately making myself vulnerable, because I know I cannot lose this game, because you are so bad.
Or I think you are so bad. I think there are things like that in League of Legends, although I don't know about the others. And so the question is, what is the real value of that? You can already taunt. Especially if you're playing something like an online game on the computer, typing up a taunting comment takes effort. It takes your fingers away from actually playing the game, because you have to type these words out. And as a result, there's already a drawback to doing it. Why do you need a game mechanic-- which means that the game designers are explicitly supporting you doing the taunting? AUDIENCE: I think that actually, at least in League, each character has four. I think it's joke, laugh, taunt, and dance. But actually people just do it for fun. For example, with high-ELO streams, sometimes all 10 people will group up and have dance parties, in tacit agreement to not fight and just goof around. Other times it's really funny. I saw this stream where Sion was getting jumped on by two people and decided to dodge a skill shot by doing a taunt, which is taking his axe and going like this-- and then dodging the skill shot, and then just walking out. So that's just a funny little [INAUDIBLE] PROFESSOR: Right. In StarCraft you can make your characters dance just by clicking back and forth really fast. Then they just walk left, walk right, walk left, walk right. And I've seen that and other game mechanics that aren't explicitly taunt mechanics used to tell the other player, no, you've basically lost. You should just concede now. Or even if you don't concede, I'm going to win this game anyway, so you might as well concede now. But then they also have a slash dance command to actually make your characters dance. And that's a game mechanic, right? When you type slash dance you do no damage, and you will deliberately take whatever damage is being inflicted on you.
And I've seen a couple of games lost because someone decided to type slash dance. And that's the risk. But again, that's the game explicitly supporting this sort of behavior. It's supposed to be meta. It's supposed to be the sort of thing that happens above the game. We're going to be playing a bunch of games. Obviously these are board games. The talking, the communication-- what's going on in this game? And what are you going to take out from the game? It's all happening above the board as opposed to inside the game. Except for Space Alert-- maybe in Space Alert it's all just part of the game. I want you to be thinking about this, about what's happening above the table-- information players are exchanging or interactions they are having that are not part of the game itself. So what does the game explicitly do to support social interaction? Either to support the kind of social interaction that's happening inside the game system, or to draw on the social relationships that you have with other people at the table to enhance the game experience. Did you have your hand up? AUDIENCE: So in League, theoretically it can be real-- there are people who often-- the enemy should not just try to use their abilities on me because I taunt them. There has been a case where I just sit there taunting and then the enemy uses all their abilities on me. And then my team ends up winning the fight off of that. Sometimes taunting someone will have an actual impact. PROFESSOR: But you could arguably have achieved the same goal without an explicit taunt, right? You know, just by standing right in front of your opponent. AUDIENCE: In your taunting you're telling the opponent that you're not really paying attention-- that you could also be attacked. You're less likely to dodge that move. PROFESSOR: True.
Something to keep in mind when you're designing your games is that it would be fun to design mechanics to support these kinds of social interaction. But you've got to be very, very careful. Right? Every time you design something, you're both giving your players a tool to do something else, and it's now officially sanctioned by your game, because you have a rule that actually says that this is something they can do. And they may use that rule creatively. Or they may use that rule destructively. Again, it all comes down to testing. You don't always know, but you should know what you're trying to achieve with the game socially when you're making a game. You should have that conversation today when you finally break up into your teams. But for now what we're going to do is just play a bunch of board games, primarily about social play. And then on Wednesday I think we're going to bring you [INAUDIBLE] games with changing rules. John, you want to talk a little bit about these games that you brought? GUEST SPEAKER: You want to do it right now? PROFESSOR: Yes. And then we'll get all the games out and have a break simultaneously. All right, I'll start with this one. Has anyone played this? OK, maybe you could help me explain it. GUEST SPEAKER: I watched two hours of videos and I don't really get it. There is really a lot to it. There is a lot going on. But the gist of it is that everyone gets to pick a role. And each role pretty much stems straight from the TV show. Have you guys seen Battlestar Galactica? It's a really good show. It's on Netflix. You should go check it out. And the consensus in the online board game community seems to be that people really like this game. So it seems there's a lot to it. But essentially you choose a character from the game, and they have different powers. Some of you are human and some of you are Cylons. The Cylons are robots that look just like humans.
So there are spaceships coming to get you, and some of the Cylons are on the ship. And their goal is to basically wipe out humanity. And the human goal is to survive-- and you don't know who is who. There's a critical moment in time where the Cylons will reveal themselves and get even better powers. So there are lots of cards. There's a military leader and a political leader, and they each get different decks of cards. If you're the political leader you basically can put people in the brig. There are so many components. AUDIENCE: It is, I'll say, the most complicated game that I've ever played. And this is true even after many replays. GUEST SPEAKER: I think you have to find it fun, because after you invest all that time to learn how to play it, you're not going to not play it. AUDIENCE: A lot of the rules are sort of unnecessary. You could literally cut 90% of it. And the end of the game has all these weird rules. It has too many rules, but it's still fun. GUEST SPEAKER: Yes. So depending on who you are, you can either be a military leader, a political leader, or you could be a pilot. The political leader gets to put people in jail. The military leader gets nukes, and they get to strategically use them against the enemy. And whenever there's a decision that has to do with fighting, they get to take the lead on that. So there are different roles. And it's interesting that each role has different decks of cards that support that role. The turn is really complicated. It's a bunch of steps. You draw a card, and then you collectively face an objective, a crisis. And you all have to vote by committing resource cards, of which some can be good and some can be bad. So the Cylons try to commit bad resource cards to make sure that the objective is not completed. And the humans tend to put in the good ones so the objective is completed. But there's a lot of metagame where the Cylons don't really want you to know that they're Cylons.
And then there are special abilities where you can flip a card up and see what everyone's resource card is. I think they're called skill cards. It's a cool simulator. And you can jump. There are so many components. We probably don't want to play this game, because I think that the play-through is two or three hours. But all the [INAUDIBLE] you might want to change. PROFESSOR: It's a great game to know about. It's a great game to play if you've got a full weekend dedicated to board gaming or something like that. GUEST SPEAKER: And there are expansions, too. This is a pretty popular game. AUDIENCE: I think we can play this in two hours. GUEST SPEAKER: I think you'd have to lead it, though, to explain it to everyone. PROFESSOR: I want to make sure there's enough time for teams going to help you to verify [INAUDIBLE] So there's probably not enough time. But I asked him to do that so that people would get a sense of what is involved. Also look at it a bit. One of the nice things I like are these little counters. It's just a counter-- five, four, three, two, one, zero-- which means you have normal pod condition. But it's a nice way to keep track of stats. GUEST SPEAKER: Is that OK for that game? PROFESSOR: Yeah. That's great. GUEST SPEAKER: This game, Space Alert-- has anyone played this one? It's really cool. I think this game is really neat. Maybe I can talk about the board? It's this cooperative game. And you're playing as a team. You're the crew on a spaceship. And it goes like this. You have 10 minutes to prepare a hyper jump. And during those 10 minutes new information is coming in from the audio CD that you play. There are 16 different tracks, so there's a lot of replayability. And at the end of the game, there's this frantic playing phase. And there are going to be periods where you can't talk to your teammates. There are periods where you can trade cards. And basically there will be these trajectories, depending upon which enemies.
So there's the red part of the ship, the white part of the ship, and the blue part of the ship. They each have an upper deck and a lower deck. And there will be enemies coming at you from space. And you man your shields, but also use energy to shoot the objects down and basically survive. And if they do six damage to any one part of your spaceship, then it's game over. When you take damage you get certain debuffs. Your shields become weaker. So you have little avatars that move around on the board. And you need to be in each room to do your action. So there's an A action, a B action, and a C action. So just taking a look at the white one, you all start here. The C action is the computer, so the first two moves of the game. I think there are 12 moves, and there are three sections. And in the first two moves of each section you have to have someone be the computer. This is the gun and the shields. The gun and shields are powered by green cubes that go down here. And you have to constantly be replenishing them. There are also internal threats. Creatures will crawl into your spaceship, and you have to get these bots up and running to take care of them. There are some really good tutorials and play-throughs on YouTube. So if you don't get a chance to play, you should definitely check it out. PROFESSOR: The play-throughs on YouTube are probably pretty hilarious, I'm going to guess. GUEST SPEAKER: They're amazing. I saw one where this guy was playing it solitaire. It's a really good way to learn the game. PROFESSOR: Playing it solitaire is probably a lot like playing FTL, if anyone's played that on iPad or a computer. GUEST SPEAKER: It reminded me of FTL quite a bit. PROFESSOR: If you imagine FTL where every single crew member is actually controlled by a different player, then you start to get a sense of it-- with several more people there. So you put these action cards down, but you don't actually resolve the game until after the 10 minutes are over.
There are different difficulties, so you can put in harder cards. There are a lot of details that I am missing. AUDIENCE: The primary one is that if you've played RoboRally, or any other kind of game that's got a little bit of a programming element to it, you're basically programming your character to do these things. You're kind of figuring out and visualizing where things are going to go. And you can talk about what you are going to do, and what to do at specific turns. But you won't actually know how it actually turns out until after the 10 minutes are spent. And a big part of the fun is just flipping over the cards, seeing what people actually did-- realizing that, oh no, wait, we both clicked the same button at the same time. GUEST SPEAKER: Or you're in the wrong room. You think you're in one room, but you're in another. And you're clicking the wrong thing. Or two people take an elevator on the same turn and it will actually delay one person's actions a bunch. So all your cards are laid out and that's your plan, but then a delay happens and you have to shift all your cards to the right. AUDIENCE: It's a game about coordination where every single mechanic in the game is there to prevent that coordination from actually happening. GUEST SPEAKER: If you listen to the tracks, there are periods where you're not allowed to communicate, which can be really frustrating. And it's constantly throwing new monsters at you from all different directions. So you're managing where the monsters are coming from, where the threats are coming from. These are the cards that either move you between rooms, to the red side or to the blue side, or where you can hit buttons. And you only have a certain number of these in your hand. And you have to coordinate with your teammates at a certain point to get the actions that your team wants. What else is interesting? AUDIENCE: Pandemic. GUEST SPEAKER: This is a co-op game where everyone on your team is playing against the board.
You are playing against the computer, essentially. So you're all a team, and basically there are four deadly viruses breaking out all over the world, and they're spreading. And you need to research the cures. There are different roles, which is cool, because in co-op games we have roles. So there are five roles. Each turn you pick a couple of cards. There are cities all over the map. And your little guys go all over the world treating diseases, setting up research centers, researching cures. And you need a certain number of cards, like five of one color. If you have the researcher role, you only need four. So the medic can cure people faster. There's a logistics guy that can fly people around the world with ease. And you only have so many actions. And constantly there will be outbreaks where, if you get too many disease cubes in one city, it will cause the cities around it to be infected. So you have these chain reactions. Again, it's about coordinating well with your team in a cooperative manner for a cooperative game-- which I wasn't able to do. We lost a bunch. PROFESSOR: It is a tough game. But the more you play it, the more you start to get used to how the systems work, especially the outbreaks-- the virus explosions. Forbidden Island is by the same designer. And I feel the rules are a little easier to understand. It's not actually all that different a game. We'll also bring it out for you to check out. And these three games over here-- Shadows over Camelot, The Wrath of Ashardalon-- are here primarily so that you can take a look at them, but we don't have enough time to play them in class today. Shadows over Camelot is basically Arthurian legend with traitor mechanics. Ashardalon is actually just Dungeons and Dragons, only packaged into a nice little box-- you don't have to think too hard about creating your own character ahead of time. So you can just jump right in and get into the playing part. It's kind of interesting.
They take a chunk of the meta out. The whole, how do you create a character, for instance. A lot of people do enjoy that part of Dungeons and Dragons, but it does take an awfully long time. And how do you create a campaign? Well, the box comes with the campaign and all the maps that you need. So this is basically one box, a Dungeons and Dragons campaign. That's going into dungeons and killing monsters and getting loot. I think we just played this one. AUDIENCE: [INAUDIBLE] I just played the expansions-- [INAUDIBLE] fade in and out of reality. [INAUDIBLE] jump between different zones. PROFESSOR: I could see that. AUDIENCE: If there is-- there is [INAUDIBLE] PROFESSOR: A lot of expansions are designed for people who've mastered the original game. Because theoretically, if you play the game often enough and you're coordinated enough as a group, you should be able to beat everything that the game throws at you. Same goes for Pandemic. The question is that coordination part and some luck of course. All the audio tracks will also be available as MP3s. If somebody wants to play Space Alert, I would suggest actually taking a corner of the room, maybe that one. Because you need to be able to hear what's going on. Actually, could you just plug it in there? AUDIENCE: It's pretty loud. If your computer has a CD drive you could do something over there. GUEST SPEAKER: I feel like the tracks are designed to disorient you. PROFESSOR: But you need to be able to hear what's going on. So it needs to be loud enough that everyone around the table can hear on top of all the talking that you're going to be making in the game. So give it a try just on the computer speakers. If we can't hear what's going on, then we'll try it again with other speakers. We should definitely be able to hear. Definitely try to pick a corner where the noise of the other games isn't going to bother you. Also, just take a look at the parts. AUDIENCE: Two to four players for Forbidden Island, it takes about half an hour. 
Two to four, for Pandemic? GUEST SPEAKER: For Pandemic two to four. AUDIENCE: And then this one, one to five. [INAUDIBLE]
MIT_CMS608_Game_Design_Spring_2014
13_Cybernetics_and_Multiplayer.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. So today's reading comes from a game designer, Lewis Pulsipher, who traditionally worked on war games. I believe he's done some of the board games, as well. But for the most part, I think, he's recognized for his war gaming work. And there are actually quite a lot of war gaming journals, mostly written by designers for fans or by fans for fans. Things like the [? Compleat ?] Strategist used to be like the PC Gamer of war gaming, right? It's like, this is what's happening and these are our thoughts, these are our critiques. And that particular reading I feel is very typical of that era of, maybe, late '80s, early to mid '90s magazine publications. And it's interesting because already at that time, sure, there are people who, say, wrote the core rulesets for various games, like Axis and Allies, or things like Dungeons and Dragons. But there was this assumption that if you are interested in something like war gaming, you're also interested in creating your own scenarios. You are already halfway a game designer. And so you have to keep all of these things in mind. Particularly if you're a game master, has anyone been like a dungeon master or a game master for a role playing group? OK. You're basically a game designer. AUDIENCE: Yeah PROFESSOR: Yeah. You know, you're taking rule sets that somebody else [? came up with ?] and cherry picking the stuff that you really want to work with and discarding the stuff that you don't and then coming up with your own rules. So just as an example of reading, I believe this is the first time I've asked you to read anything from the tabletop book? AUDIENCE: [? Tic a Tac. ?] PROFESSOR: [? Tic a Tac? ?] AUDIENCE: Yeah. 
PROFESSOR: OK, all right. AUDIENCE: You just ran this first. PROFESSOR: That's true. Yeah. But this particular piece of reading is also a nice little time capsule of how people used to write about game design. Even as a fairly modern piece of writing, it still has that same sort of style. So there are a lot of examples of, like, this is how this theoretical game is going to be played and I'm going to write it out completely in prose. And even when they refer to games that already exist, like, I believe they do cite [? Vinci, ?] at one point. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, again the game mechanics that they're discussing are all in prose. It's not like in point form or anything like that, it's just free prose. I don't think there's ever any assumption that if you contributed an essay for something like [? the Compleat ?] Strategist that any diagrams you included would be printed in the magazine, right? Because a lot of people who actually created those essays weren't necessarily artists. And that's not to say that you didn't have diagrams in magazines like that, but it was expensive. This was before desktop publishing. So people just got really, really used to describing things in prose, describing core game mechanics in prose, not just in point form or bullet points. So there were basically two suggestions that he had on how to deal with the whole three-player problem. Actually, before I get into it, can anyone quickly sum up what's the main problem with designing a game for three players? Your answer. AUDIENCE: Well, one of them was [? turtling-- ?] PROFESSOR: Mhm. AUDIENCE: --where two of the players sort of go at it for most of the game and then they weaken themselves, and then the third player [INAUDIBLE] PROFESSOR: OK, so, for a certain set of rules, that's one kind of strategy which encourages you to basically not interact with anybody else and then win the game, right? 
AUDIENCE: What I was going to say is, this is where it's just one player chooses which of the other two players wins. PROFESSOR: Mhm, so I believe kingmaking was the name that was given to that. You may not be in a position to win the game but in a game with only three players, you might have enough influence to basically say, I don't like you, therefore you are not going to win. Therefore, the other person is going to win, even though I can't win. There were a few more subtle problems that he described. Anyone remember any of them? [INAUDIBLE] Yes? AUDIENCE: Politics, so two people decide to team up on the other player and the other player really can't win. PROFESSOR: Uh-- [INTERPOSING VOICES] PROFESSOR: --yeah, he brought that up. It does come up in the reading. It's not necessarily described as a problem, but something that does happen and it happens often. So you either take advantage of it as a designer, or your game suffers because you hadn't thought about it, right? This whole idea of, somebody's in the lead and two people probably want to do a temporary alliance at the very least to take down that player. There was one called sandbagging and I actually am forgetting what that means here. AUDIENCE: If we could set that up, saying that you're [? doing ?] less well off than you actually are. PROFESSOR: Oh, yeah. Pretending that you are-- like bordering on lying, right? It's like, no I'm not going to win the game in the next turn, don't worry about me, deal with him, he's scarier. And that sort of subtle deceit where you have to actively conceal the fact that you are in the lead. There are some games that say, all right, here are some ways that you can do that. I'm not sure, are [? we enough ?] for Crunch? I think we are. [INTERPOSING VOICES] PROFESSOR: Yeah. So Crunch is a game about being a banker. 
You're trying to fill up your golden parachute before the market crashes, and you are encouraged to play the game in a three-piece suit so that you can actually actively hide cards. And the idea is that at the end of the game, the amount of money that you've managed to keep for yourself for your own personal use, including any cards that you happen to [? have ?] on yourself, will win you the game. It's not about keeping your bank afloat, at all, it's all about leaving with the largest personal fortune. So it's a wry, satirical game, but one that actively encourages you to conceal how well you're performing. Now, it's a two-player game, it's not like a three-player game, so that you are also well aware that your opponent is doing the same sort of thing. You're just trying to do the best that you can in accumulating as much cash as possible because that's what's going to win you the game. But in the situation with three players, you can imagine a situation where someone could figure out that you're in the lead and that person can catch up. And what can you do? Like, pass cards to the player that you want to win, for instance? And that might be tough because someone else might be able to interfere with that process. AUDIENCE: In the game [? Junta ?] you're a politician in some Latin American banana republic, and your goal is to embezzle as much foreign aid money as possible. And one of the things is that, the number [INAUDIBLE] drawn each turn is variable, and only the president really knows, and so [INAUDIBLE] is that sometimes the president won't draw very much, which might cause people to launch a coup against him, because they think he's lying and taking a lot for himself and they're scared he'll win for that reason. But, basically there's so much uncertainty about how much people have. It's not even computable how much people have in their-- PROFESSOR: It's Junta? With a J, right? AUDIENCE: Yeah. PROFESSOR: So, [? this ?] [? is ?] [? be, ?] right? 
It sort of plays around with this idea that you have this dictator that seems to be amassing a large personal fortune. Whether or not the dictator can be successful in the process of amassing a large personal fortune, that game is designed to paint a giant target on them. And we have another game here called King of Tokyo, which is basically giant monsters trying to take over Tokyo Bay. Every single generic giant robot, giant dinosaur, octopus thing, mecha dinosaur. And whoever happens to be king of the hill at that point has a giant target on them, and the game mechanics all support that, that's what the game's all about. So those are some of the problems and he proposed two solutions, not necessarily the only two solutions, but these are the ones that occurred to him, particularly in war gaming, that might work. And they're actually kind of opposite ends of the spectrum. Does anyone remember one of the two solutions? AUDIENCE: Equilibrium? PROFESSOR: Mhm. [INAUDIBLE] AUDIENCE: If one person gets ahead then they cannot be significantly defeated before they win the game. PROFESSOR: I think equilibrium was the opposite of that. AUDIENCE: Opposite? PROFESSOR: Yeah, so you've actually just combined two answers into one, which is good. [LAUGHTER] So, the equilibrium, if I recall correctly from the reading, the equilibrium design method is taking advantage of things like the balance of power theory, which is, if there is a person who is clearly ahead, everybody else who is weaker is probably going to band together and take them down, and if you just balance the game in a certain way you give them the tools to do that, and that takes anyone who seems to be ahead back to a sort of equal power level. That balance of power theory is something from political science. I've only heard about it in passing. If there's anybody here who has taken political science classes, you can tell me if I'm getting this right or wrong. 
The whole thing is not a far-fetched idea, right? If you've got a powerful nation, other nations bordering that powerful nation are likely to either team up with the powerful nation, or more likely to band their [? nations ?] together, even though they may not be ideologically linked or anything. They're ideologically linked against the powerful nation, so same thing goes for players. So basically it comes down to giving everybody the tools to take the leader down a notch, and that restores everything to a sort of equilibrium state. And then your success criteria for the game probably isn't who is going to gain such a massive lead, because if the game is deliberately designed to pull everything back to equilibrium, you need to have some other kind of success criteria. Maybe the game is time limited, for instance, maybe the game has different goals for each player, or something. So even though your power levels are about the same you might be able to get victory points on some things faster than others. The alternative, which you hinted at, which was, basically, making it difficult to take down the person who's in the lead once they're already in the lead. He had one very specific implementation of this, or one particular rule of thumb, on how effective a person in the lead must be in order to prevent the kingmaking, and sandbagging, and [? turtling, ?] and everything. Remember this? Remember? The basic idea is that you must be able to do it in one turn, in a single turn. If you are in the lead, you should be able to complete the game and win the game in a single turn. I'm not so sure whether I agree with that. And I don't think he means win it on your turn. I think he means in a round of players. If you're in the lead in the game, you should be able to do something to cement that lead so that within that turn no one's going to be able to do anything about changing the outcome of the game. 
This is for games where there is a clear win condition that is based on overpowering your opponents. You don't want to have the situation where, yeah, I'm in the lead but everybody else still has enough time to take me down to the point that somebody else can then catch up and be in the lead, because that basically means that the first person to get in the lead will probably lose, and that's sad. But we see that in certain computer games. We've seen games where whoever is in the lead has a giant target painted on their back and you're expected to be taken down. If you're the first one in the lead you are probably going to get hit, but then there's still enough time to be able to pull ahead. So that's closer to the equilibrium style, right? So I think I heard someone give-- AUDIENCE: Oh, Mario Kart. PROFESSOR: Mario Kart being the obvious one. You get increasingly powerful weapons the further behind you get, or useful things. There's an example in this book here, the Rules of Play-- this is the hardback version-- of Super Monkey Ball. And, basically, all their weapons face forward. So if you're in first place, you can only shoot forward, but there's nothing to shoot; there's nobody ahead of you. It's not actually a technical disadvantage, it's not like a weapon that slows you down, but it's a useless thing. You get these powerups that are useless. So a lot of these feed into a different way of looking at games that is a very formal technique, and one you've probably heard mentioned multiple times in this class and in other classes, as well, and that's just feedback systems. I don't care about UI feedback, I'm talking about feedback in the cybernetic [? sense. ?] Has anyone here heard about cybernetics in another class? Yeah. AUDIENCE: Oh, I was going with the feedback systems that you used in your other class, like we did [? in the engineering one. ?] PROFESSOR: Yeah, yeah. I definitely brought it up in there. 
AUDIENCE: Yeah, where you had a forward feedback system and a backward feedback system, and, basically-- PROFESSOR: Mhm, positive and negative. Yeah. AUDIENCE: --If you were winning, you start winning by more, or if you're losing, you start to catch up. For example, [INAUDIBLE] PROFESSOR: Right. So, the whole basic idea of cybernetics is based on, sort of, automated control. You create systems, mechanical or electronic, or with a set of rules, that basically are going to either accelerate or snowball the performance of the system, or bring it back to equilibrium. If this sounds very familiar to something [? Mech-E ?] or [? EE ?] or something, that's completely expected because this all originally came from that. So, the basic model is that you have some sort of sensor, something that detects an environmental change. It could be something that detects the state of your game, some way of telling you this is the current state that you're looking at. Then it goes to a comparator-- which needs more space because it's a longer word, comparator-- which performs some sort of logical assessment on the state of the system that you're looking at. And then it goes to an actuator, which performs some sort of action. And it's called feedback because the thing just keeps feeding back, all right? Real-life systems that do this, what are some of the simplest real-life systems that do this, electronically or mechanically, without human intervention? AUDIENCE: Washing machine. PROFESSOR: Washing machine? AUDIENCE: Thermostat. PROFESSOR: Thermostat. Washing machine? That would watch water levels I think, right? AUDIENCE: Well, the [INAUDIBLE] PROFESSOR: Oh, yeah, yeah. That's true. So, thermostat, things get too hot, turn on the cooling system or shut off the heating system. AUDIENCE: Wonder if that emergency [INAUDIBLE] PROFESSOR: Emergency? AUDIENCE: [INAUDIBLE] There's a hole at the top of sinks where if the water-- PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. 
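The sensor, comparator, actuator loop just described can be sketched in code. This is a minimal sketch of the thermostat example; all the function names and the 20-degree setpoint are invented for illustration:

```python
# Minimal sketch of the sensor -> comparator -> actuator loop described
# above, using the thermostat as the example. Names and the setpoint
# are invented for illustration.

def sensor(room_temp):
    # Detects the current state of the system.
    return room_temp

def comparator(reading, setpoint=20.0):
    # Logical assessment: how far is the state from the target?
    return reading - setpoint

def actuator(error):
    # Acts to push the system back toward equilibrium (negative feedback).
    if error > 0:
        return "cool"
    if error < 0:
        return "heat"
    return "idle"

def thermostat_step(room_temp):
    # One pass around the feedback loop.
    return actuator(comparator(sensor(room_temp)))
```

Running `thermostat_step` repeatedly as the room temperature changes is what makes this feedback: each action changes the state that the sensor reads on the next pass.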
The sensor is whether the water has actually hit that hole. [INAUDIBLE] I guess the sensor is the level of the water, the comparator is the height of the hole, and then the actuator is if it's at the height of the hole, go out the hole, right? So, it's actually just like physics is the entire system for you, but it's engineered in a way to take advantage of that. AUDIENCE: Systems like autopilot. PROFESSOR: Mhm, keeping you on the path that you're supposed to be on, always just doing minute little corrections. The majority of these things are real-world applications of these negative feedback systems. And the idea is that if there is a change in one direction, pull it back. If it goes in the other direction, pull it forward, so it always hovers around the same point. These are deliberately designed systems trying to do this all the time. But there are a lot of real-world systems that do the opposite, right? Stuff that snowballs. For instance, a snowball rolling down the hill. As it goes down it gets larger and it makes it slightly easier to continue rolling. I think the area in contact with the snow actually decreases by the ratio [? required ?] to make it easier to keep going. It picks up more snow and gets larger and larger. Another positive feedback system in the real world? AUDIENCE: I don't know the name of the game, but there's this [INAUDIBLE] game where you start off as an asteroid. AUDIENCE: [? Astrobrah. ?] AUDIENCE: Is that what it is? PROFESSOR: Astro-what? [INAUDIBLE] AUDIENCE: You start off as a small asteroid, and what happens is you meet other small asteroids and you grow up to be a planet, and then, at that point, asteroids become fairly easy to gobble up, but you want to go after other planets smaller than you, and then [INAUDIBLE] and then keep on getting bigger until you become a black hole. PROFESSOR: Hm. AUDIENCE: And then, eventually you get so large that you can just eat everything up afterwards. 
PROFESSOR: That reminds me of two games. One is Orbital, which was a Nintendo game. Do you know? AUDIENCE: Wait, which one? Which game were you saying? AUDIENCE: That sounds like Katamari Damacy. PROFESSOR: It sounds like Katamari Damacy. AUDIENCE: I think that sounds like a genre. PROFESSOR: Yeah. AUDIENCE: So, get bigger so you can get bigger. PROFESSOR: Yeah. Let's see. What's the game done by Andy Nealen? AUDIENCE: [? Orthos? ?] No. Osmos. PROFESSOR: Osmos. Yeah, I think that's a PC and portable device game. So, Osmos is an interesting situation because yes, the bigger you get, the bigger you can get, because the idea of Osmos is like you're just a little amoeba-like blob, so asteroid, amoeba-like blob. You can eat anything smaller than you, and everything that you eat makes you bigger, so you can eat even bigger things. But actually it makes it more difficult for you to navigate because you're more massive now. So you actually have both negative and positive loops going on at once. AUDIENCE: I have another possible one. PROFESSOR: Mhm? AUDIENCE: Bombs. PROFESSOR: Bombs? AUDIENCE: Yeah, because then [INAUDIBLE] exploding and-- PROFESSOR: It makes other things explode? AUDIENCE: Well there's that, but there's also some bombs where-- AUDIENCE: Nuclear bombs. AUDIENCE: Yeah, nuclear bombs. PROFESSOR: Oh, so like you're using an atomic bomb to set off a hydrogen bomb, that sort of thing? AUDIENCE: Something like that. PROFESSOR: OK, yeah. Positive feedback often happens in military conflict, right? If you've got overwhelming forces, then if everything goes according to military doctrine, every engagement is going to give you yet a greater advantage. [INAUDIBLE] So, the way this has been applied in games is-- just to make it literal-- you have some sort of state of the game. Am I getting this right? Yeah. You have the state of the game. Wait, hold on, let me just make sure I've got this perfectly right. Here we go. 
You have some sort of scoring function. It doesn't have to be the literal score, but it has to be some way of assessing how close you are to some sort of goal. So either the short term goal or the overall goal of the game, some sort of scoring function. And that goes into some set of player input. There are some versions of this diagram that separate this into two things, that's kind of like the player decides what they're going to do and then actually executes the input. Actually, there is one more stage, sorry. There is one more stage, and that's the mechanical bias. So the state of the game can be revealed in some sort of scoring function. A player assesses what they want to do and then tries to execute it. That is filtered within the computer's or the board game's or the card game's interpretation of what that input is going to do, and then that feeds back into game state, right? So for games like Mario Kart, you perceive that you are doing badly because you are the last one at the back of the race. You drive your vehicle into one of those rotating boxes because you're deliberately steering toward them because they're probably your only chance to be able to catch up again. You usually get something really, really sweet because you're all the way behind and the way the game is designed, if you're all the way behind you get all the best toys. Every single tool that you need to be able to catch up is given to you when you are all the way at the back. That's the mechanical bias, all right? It gives you the right tool and that affects game state because now you have this wonderful tool and you use it, hopefully it works, and then you gain a couple of places up in the race. So, there are a couple of other examples that are a little bit clearer. So, say, basketball. If you've got a situation where you've got five people on each side, you could do a game where if you start to have a lead of five points, you lose a player. Someone's benched. 
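The Mario Kart style mechanical bias can be sketched as a position-dependent item table. The item names and thresholds below are invented; the point is only that the pool of possible rewards improves the further back you are:

```python
import random

# Hypothetical rubber-banding item tables: the further back you are in
# the race, the better the items you can roll. All names are invented.
ITEM_TABLES = {
    "front":  ["banana", "green shell"],
    "middle": ["red shell", "mushroom"],
    "back":   ["star", "lightning", "triple mushroom"],
}

def item_table(position, total_racers):
    # Mechanical bias: map race position to an item pool.
    fraction = position / total_racers
    if fraction <= 1 / 3:
        return ITEM_TABLES["front"]
    if fraction <= 2 / 3:
        return ITEM_TABLES["middle"]
    return ITEM_TABLES["back"]

def roll_item(position, total_racers, rng=random):
    # The roll itself is random, but the pool is biased by position.
    return rng.choice(item_table(position, total_racers))
```

The player still supplies the input (steering into the box); the bias only shapes what the input can produce, which is why it counts as negative feedback rather than playing the game for you.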
[LAUGHTER] So is that negative or is it positive? AUDIENCE: Negative. PROFESSOR: That's negative, right? Because you're saying, all right, you are far enough ahead, we're going to take away one of the advantages. Is there another way to do negative feedback on that example? Instead of taking away a player? AUDIENCE: Add another player to the others. PROFESSOR: You could add another player to the other side. That's also negative feedback, even though you're adding something, what you're really doing is you're going to give an advantage to someone who's not doing so well, and that's trying to bring things to equilibrium. The positive feedback version of this would be-- [LAUGHTER] [INTERPOSING VOICES] What? AUDIENCE: Turn the three-point line into a four-point line [INAUDIBLE] PROFESSOR: Oh, OK. All right. AUDIENCE: Every time you score a point the other team loses a player. PROFESSOR: Yeah, OK. [LAUGHTER] AUDIENCE: Do you ever play basketball? [LAUGHTER] PROFESSOR: I was thinking something more literal like just giving the team that's ahead extra players. Right, that's right. You could do it, every five points that you get, you get one more player. Then pretty soon you've got two more players, then pretty soon you've got three more players, and then-- AUDIENCE: Pretty soon you have too many players. PROFESSOR: --and then you just have more people than anyone can move on a basketball court, and as a result of that you kind of win by default. AUDIENCE: Some games have a similar mechanism, like, somebody's on fire, they make three shots in a row, and all of a sudden they're just way better at playing. PROFESSOR: They run faster, they jump a little higher. Yeah, and the computer games can totally do that. AUDIENCE: [INAUDIBLE] [LAUGHTER] PROFESSOR: Well-- [INTERPOSING VOICES] AUDIENCE: Being on fire is the same as momentum. PROFESSOR: Yeah. Yeah. 
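The benching rule just discussed is easy to state as code. A toy version, assuming five players a side and one benched player per five points of lead; flipping the subtraction to an addition for the leading team would turn this negative feedback into the positive feedback version:

```python
# Toy negative feedback for the basketball example: for every five
# points of lead, the leading team benches one player (never going
# below one player on the court). All numbers are invented.
def team_sizes(score_a, score_b, base=5, per_points=5):
    lead = score_a - score_b
    bench_a = max(0, lead) // per_points
    bench_b = max(0, -lead) // per_points
    return max(1, base - bench_a), max(1, base - bench_b)
```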
AUDIENCE: Statistically, if you're throwing a good game and you're doing more and more pitches, better and better pitches, you are playing better. PROFESSOR: But I'm wondering whether that's correlation or causation. Is the reason why you're doing better because-- AUDIENCE: Right now they're thinking it's causation. PROFESSOR: It's causation? AUDIENCE: They're thinking it. There's a new study that just came out. PROFESSOR: But it definitely comes up in sports psychology, right? You're sort of building on the mental momentum of doing well. And similarly a team that's not doing well can lose hope. Again, that's positive feedback. I just want to be extremely clear about the use of positive: it's anything that amplifies differences that already exist. So, there are a couple of things to think about when it comes to feedback, especially as a problem-solving technique for design. We've already talked about how negative feedback tries to bring things down to equilibrium, so it's a stabilizing force. If your game is out of control and unpredictable and chaotic, negative feedback can rein things in a little bit. Sure, it may be unpredictable, but it's not going to hurt me so bad. Or if somebody gets an advantage it's not going to be a runaway advantage. Similarly, if your game is actually always hovering at equilibrium and nobody seems to be getting an advantage, then positive feedback will destabilize that. It will make it more likely that somebody who gets an advantage will actually get a large advantage, and that could be a good thing if your game is kind of dull and it's always just kind of hovering at the same level. But there's a trade-off with that, because any amount of feedback that you're adding into your game, especially this kind of automated feedback, this is something that your rules are automatically generating, can actually be taking control away from players, and as a result of that might actually make the game less engaging. 
So even in a game where you say, all right, this game is too stable, we're going to add some positive feedback mechanisms so that people who have an advantage can capitalize on that advantage. But if that's not done right, then you've just got a situation where once someone starts winning everybody else loses interest because there's no way that they can catch up. So it destabilizes it to the point that, yeah, someone's going to win this game whereas previously it was always stuck at equilibrium. But it's very easy to lose people in that process because you're just creating an automated system to take control away from them. So that's one danger of putting in a feedback system. Also we talked about bringing things down to equilibrium to make things a little bit easier for people to catch up or to make advantages a little bit less drastic. That can also make your game much more dull, right? Because nobody gets an advantage, and depending on your win conditions, depending on your end of game conditions, it could actually just make it interminably long. You could take a half an hour game and make it a one hour game. So on the flip side, the positive feedback game can actually help rein in the length of the game. So if your game is currently taking an hour long [INAUDIBLE] and your victory conditions happen to be on gaining an advantage over your opponents, then a positive feedback system can help speed that along. Right? OK, maybe we can cut it down by half by just giving some advantages to the person who's ahead or penalizing the people who are behind. That's an interesting little side effect: because positive feedback builds on itself, if you have an advantage early on, like you have early success, then positive feedback is going to be the mechanism that's going to help build on that. 
On the other hand, if you have a negative feedback system in your game, which means everything is kind of being brought down to an equilibrium, but everyone's getting closer to actually finishing the game, but no one really has a huge lead, then late successes mean a lot more. That last boost when you are near the final-- What's it called? AUDIENCE: The finish line. PROFESSOR: The finish line! That last boost before you hit the finish line on the very last round of Mario Kart means a lot because the whole game is designed with a huge amount of negative feedback, and could make all the difference. People have played Mario Kart where that was pretty much the game, there was this last fight [INAUDIBLE] Someone just got hit with a blue shell in the last five seconds and [INAUDIBLE] So negative feedback prioritizes what happens late in the game, whereas positive feedback prioritizes what happens earlier on in the game, the strategies that you pick, the cards that you draw and play right at the beginning of the game [INAUDIBLE] Positive feedback stresses that. And finally, always keep in mind that any interacting game systems probably create emergent feedback loops even if you had [INAUDIBLE] So what you want to do is you want to try to find that. There is a tool out there that I've seen some student teams use, and I don't know whether it's going to be useful for you, I'm just going to give you it. Has anyone heard of causal loop diagrams? It's from economics. You have? AUDIENCE: [INAUDIBLE] PROFESSOR: It's a very simple concept. You just basically write in your variables in your game, and you can add in a few in-between variables if you know how players are thinking about your game. I've been using Mario Kart too much for an example. Let's pick another game with an obvious [? defect ?] in it that everyone in this room probably knows. AUDIENCE: Chess. PROFESSOR: Chess? Chess, does our piece-- OK, so, [? our ?] [? pieces ?] 
in chess sort of influence the number of attack [? patterns. ?] Which means the number of pieces will influence the number of pieces being threatened. Of your opponents' pieces being threatened. And so it's just the number of opponent pieces taken. Now, if I flip this around a little bit, I think I could make an argument that if I switched it to the number of my own pieces, rather than my opponents' pieces, then it connects. But not everything is a positive connection. Number of pieces that I have means if I have more pieces I have attack power, sort of. It's not exactly a linear connection, but [INAUDIBLE] chess set, right? I have more pieces than you, I have more ways to attack than you. If I have more ways to attack than you, then actually I have fewer pieces threatened because I also get more ways to defend. So I'm going to say, defend attack [? powers. ?] Anything that I can attack, I can prevent from-- I say, well, if you take this piece, then I'm just going to take the piece that you just moved, and with a one-for-one trade, I also have more pieces than you. And if fewer of my pieces are threatened, that means I can-- let's see, if the number of pieces that are threatened goes down, then the number of pieces that I can take goes up. AUDIENCE: Wait. [INTERPOSING VOICES] PROFESSOR: I think, yeah. AUDIENCE: [INAUDIBLE] be ignored PROFESSOR: Hm? AUDIENCE: If the number of your pieces threatened goes down then the number of your pieces taken goes down, as well. PROFESSOR: Yes. AUDIENCE: Those two are positively correlated. PROFESSOR: Yes, yes, that's right. You are right. And that should just decrease. And if the number of pieces taken goes down-- If the number of pieces taken goes up, then I have fewer pieces. If I have fewer pieces taken, then I have more pieces. So we have two plus signs and two minus signs. Overall, this entire loop is called a reinforcing loop. Right? This is part of the feedback loop. You can connect this to other things. 
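The loop just traced on the board can be checked mechanically. A standard rule of thumb for causal loop diagrams is that a loop with an even number of negative links is reinforcing (positive feedback) and one with an odd number is balancing (negative feedback); a sketch:

```python
# Classify a causal loop by the parity of its negative links:
# even number of minus signs -> reinforcing (positive feedback),
# odd number -> balancing (negative feedback).
def classify_loop(link_signs):
    negatives = sum(1 for sign in link_signs if sign < 0)
    return "reinforcing" if negatives % 2 == 0 else "balancing"

# The chess loop from the board: my pieces (+) attack power,
# attack power (-) my pieces threatened, pieces threatened (+)
# pieces taken, pieces taken (-) my pieces.
chess_loop = [+1, -1, +1, -1]
```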
Maybe in chess-- I'm not terribly good at causal loop diagrams to begin with, but in chess you might be able to connect it to other things like the positioning of pieces, or the value of pieces. So those might be reinforcing, and reinforcing is basically another way of saying it's a positive feedback loop. Another thing you can say is that it's a balancing loop, which means a negative feedback loop, and usually you mark that with [? a ?] [? B. ?] [? Just remember ?] this tool is also used by students from economics and management who have actually done that, because they're trying to study things like supply chains and how a system [INAUDIBLE] an industry. That's why they've already used the tool and then they can put it in. But, it might be worth looking up; it might be a useful diagnostic tool for your game if you're just trying to figure out what all the different loops are-- just write down all of your variables and figure out whether things even up. Two negatives and two positives give you a reinforcing loop; positive, positive, positive, positive gives you a reinforcing loop; positive, positive, negative, and that's it, gives you a balancing loop. Basically, an even number of negatives adds up to a reinforcing loop, and an odd number of negatives gives you a balancing loop. So, the one thing to keep in mind is that occasionally what happens is that you have something like a time delay, and that's the notation for a time delay. And that means things sometimes will end up oscillating because of your [? time delay. ?] We had a game, for instance, which was about climate change and you can invest money into research. Yeah, at some point in the far, far future that pays off, right? But that doesn't necessarily mean that it's going to pay off right away. So that may end up in situations where you can make money, with big fluctuations around the [? equilibrium, ?] so it's balancing in terms of equilibrium. But it's just like a thermostat that's very slow to respond.
You might have something like a heater that warms up a room and it makes the room too warm before the sensor actually realizes that it's warm enough and then shuts it off, and then the room becomes really, really cold before the sensor realizes that it's too cold and turns it back on again. And that can lead to these oscillations. So also keep in mind things like time delays when you're drawing out these diagrams. How long does it take this advantage to turn into that advantage, or this increase to turn into that decrease? Yeah, any questions? Yes. AUDIENCE: So isn't [? a change in ?] negative and positive feedback usually-- it seems really weird because usually positive means a multiplier greater than one and negative means a multiplier between zero and one, in the sense that, how much does your point [? lead ?] really mean? If I had a one point lead right now, positive feedback means that it's really like a two point lead-- PROFESSOR: Hm. AUDIENCE: --because the points provide more for me later, and even though [INAUDIBLE] negative loop, it will often mean that it's actually more like a half point lead because it means less than it really does, whereas-- PROFESSOR: Yeah. AUDIENCE: --it means there's definitely a case where sometimes, if it's negative, then that can literally be negative, in the sense that taking this lead right now could actually hurt me in the long run. PROFESSOR: Yeah, I think positive and negative are signs originally inherited from math, from cybernetics. It's actually referring to a differential, rather than the-- it's referring to the rate of change, rather than the actual multiplier. That's some baggage that we've taken on. It's funny because some people think negative feedback means bad, right? And it's like, no, negative feedback could mean good, and negative feedback for someone who's behind could be an advantage.
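The slow thermostat described above can be put into a toy simulation. Everything here is an illustrative assumption (the heating rate, the five-step sensor delay, the set point are all made up); the point is just that acting on stale readings makes a balancing loop overshoot in both directions.

```python
# Toy thermostat whose controller sees a temperature reading
# from `delay` steps ago, so it overshoots past the set point
# before reacting, in both directions.
def simulate(steps=60, delay=5, setpoint=20.0):
    temps = [15.0]                 # room temperature history
    heater_on = True
    for _ in range(steps):
        # The sensor lags behind the actual room temperature.
        sensed = temps[max(0, len(temps) - 1 - delay)]
        if sensed < setpoint:
            heater_on = True
        elif sensed > setpoint:
            heater_on = False
        change = 0.5 if heater_on else -0.5   # heat gain or loss per tick
        temps.append(temps[-1] + change)
    return temps

temps = simulate()
print(max(temps), min(temps[20:]))  # swings up to 23.0 and down to 17.0
```

Setting `delay=0` in this sketch shrinks the swing to a single half-degree step around the set point, which is the well-behaved equilibrium a negative feedback loop normally settles into.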
So the terminology doesn't quite make it easy for everyone to use, but it's something that is currently in use, so it's good to know that game designers do use these terms. Hm? AUDIENCE: Talking about flight control systems and positive feedback. Bad. PROFESSOR: A control system where positive feedback would be bad. Yeah, I think that's kind of the opposite of the word control. [LAUGHTER] Chain reaction might be a great way to describe that, for instance. Any other questions about these ideas? A lot of the stuff that I've been talking about is known as first order cybernetics, which is basically-- the system is doing its own thing happily, and if I'm taking a look at the system from the outside I don't really have any influence on how it's going to perform. But there is a whole school of second order cybernetics that I'm totally unfamiliar with, which actually takes into account having a person in the loop, which you would think would be a lot more applicable for games, especially games being played by people. But I don't know much about second order cybernetics. So, if you're interested in a research project, a thesis, or something like that, come and talk to me. I would love to [INAUDIBLE] because I don't know about that just yet. All right. One final word: this is a very formal way of looking at game systems. Just as I have [INAUDIBLE]-- I just admitted that it doesn't take people's involvement in the process very well-- it is not necessarily the case that the formal way of looking at a system, just looking at how the rules interact, is always the best way to look at games. On many occasions, it is actually not a useful technique. I can have [? all these systems ?]
in my rules, but if people are going to read my rules and interpret my rules differently, as you've already seen happen in this class, then they may be motivated to do things that operate against my assumptions because of something else-- my narrative, my aesthetic, their own individual motivations. If somebody wants to take down somebody else in a game just because they happen to hate that person, and it's outside of the game rules, then nothing in the rules is going to tell me that. But it's still going to affect how that game gets played, and I might want to take that into account. The game Diplomacy, for instance, has to take into account the fact that you're probably playing it with people that you know, and you have some sort of existing relationship with them. I don't think it does a very, very good job of insulating you from the fallout of the game. [LAUGHTER] Everyone knows what I'm talking about, right? OK. You just aren't friends with the people that you play Diplomacy with anymore. That's why I don't like playing that game, but I like talking about it. So, we'll be going into things like games for social play, and the social function of games-- how you interact with people, right? Some other topics are going to be things like games of simulation, games that hold up a slightly flawed mirror to the real world, but an interesting one. It could be a funhouse mirror, which could be fun. So, we'll be looking at that in the weeks ahead. This is probably about as formal as we get. We've been talking about things like information systems; that's also very formal. Just keep in mind that's just one school of game design and writing about game design. OK. So we have games that all have an interesting way of dealing with this. In some of them, you're going to see these problems come up, especially in [? Imperiums. ?] How many people do we have in class today? One, two, three, four, five, six, seven, eight, nine, 10, 11. OK.
So, 11, that's like one four-person game-- [INTERPOSING VOICES] PROFESSOR: OK. AUDIENCE: [INAUDIBLE] PROFESSOR: So we'll get about three games going on at once. A game like King of Tokyo is very much a king of the hill game, so it's deliberately trying to ask you to take down the person who's in the lead, and the game mechanics make you wonder whether being in the lead is really all that great. So, it's playing around with [INAUDIBLE] Small World is occupying territory with your army. Yeah? AUDIENCE: [INAUDIBLE] when he talked about Vinci-- Small World is an update of Vinci. PROFESSOR: Yeah, though faster to play, which fits in this class. We have Vinci too if anybody wants to take a look at that. We have that at home but we don't play that often because it takes like two hours. Unless, of course, you already know it. Lifeboats is just about a bunch of sailors trying to get to safety. Basically, their boats are sinking and they're all trying to get to safety, and everyone's jumping on and pushing people out of lifeboats, and swimming to lifeboats, and stuff like that. So it's all about knocking people out. Hoity Toity and-- AUDIENCE: El Grande and [INAUDIBLE] PROFESSOR: Yeah, I actually know relatively little about these ones. I forget-- AUDIENCE: Bluffing. PROFESSOR: Yeah. Hoity Toity's a game about bluffing? OK. These were recommended by somebody's [? father ?] when I shared with them the [? syllabus. ?] And El Grande is a Spanish game about being territorial? AUDIENCE: Yeah. PROFESSOR: No, I totally [? made this up. ?] AUDIENCE: Do you have English rules in here? PROFESSOR: Yes, the English rules are in there. The box is just [INAUDIBLE] but I'm pretty sure I looked in there earlier and I saw British or English rules. Intrigue is the game that I like the most. Basically, you're running a university and you're trying to put your people into another university.
That's not what the game says on the box, but I would like you to try playing the game with that theme in mind. [LAUGHTER] So, yeah, it's an interesting game because you are doing exactly the same thing that everybody else is trying to do, but in order to succeed in the game you have to put your troops, your colors, into other people's territory, and they're trying to put theirs into yours. And there's some mutual benefit [INAUDIBLE] So a lot of opportunity for you-- It's a pretty good negative feedback game because it gives you a whole bunch of tools to basically take [INAUDIBLE] take the lead. Take a look at which games hide the progress of players from you. Some of these games, you can easily see who's ahead, and in some of these games it's really hard to tell, but that's deliberate so that you don't have these situations where everyone gangs up on the person who's in the lead. Look at how they're avoiding things like [? turtling ?] and how they discourage [? stuff like ?] sandbagging, or maybe sometimes encourage it and use it in their favor. Like, Intrigue definitely encourages sandbagging. All right. And then we'll have these boxes out at three o'clock, working on teams, [INAUDIBLE] I think we're going to see your teams before spring break, so you might want to give each other tasks to work on during spring break. If you can't think of anything, test over spring break. A good number of you are going to see people who haven't seen your game yet. And-- AUDIENCE: So that you can have written rules, a first draft of your written rules. It's a great opportunity to test then. PROFESSOR: Yeah. And if you don't have a first draft of the written rules, then this last hour of class is probably a good time to bring out the laptop and just start editing that text so that you can go into spring break with something that you can test with. All right? AUDIENCE: And for the game King of Tokyo, people will probably be able to play twice; maybe you can play another game.
But then again, they're all about 40 minutes. PROFESSOR: Yeah. AUDIENCE: Is El Grande longer? AUDIENCE: Yeah, El Grande is probably 90 minutes or so. AUDIENCE: Small World is 40 to 80. [INAUDIBLE]
MIT_CMS608_Game_Design_Spring_2014
22_Changing_Rules_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, thanks for showing up on time. I really appreciate it. It's kind of weird like this. I never see this front row completely empty, but this will be [INAUDIBLE]. Thank you. That really makes me feel better. So, today's structure of class is going to be very similar to what we did last time, which is we're going to talk a little bit about a reading, play a bunch of games, [INAUDIBLE] the reading [INAUDIBLE] end of class probably with more time for you to work through your prototype. Just as a reminder, Rick and I are here to do any play testing that you might want. And we'll give you feedback. Along with the feedback, we also put all the grades up. Thanks to Rick for getting them all done for assignment two. And-- if you look at the comments that we've sent, there are two sections. One is a section that is just about your individual write-up, which only you see. And then there is a chunk of comments that is geared to every single one of the team members, which is basically based on your product itself. Do take those to heart, because there's a lot of stuff there-- a lot of feedback that we're giving you in the hopes that you will be able to put it to good use for assignment three. And-- did you have a general comment? RICK: Yeah, in particular: rules, rules, rules. Proofread your rules. Test your rules. The presentation and expression of your rules is just as important, if not more important, than the design of your game. That's why we have you play games you haven't played before and read rules you haven't seen before-- to see how badly the professionals get it done. Try to be better than that.
The main things we're looking for: coherence, clarity, organization, figures, and illustrations. So, for assignment three, please try to put in illustrations-- at the bare minimum, illustrate what it looks like when your game is set up. Illustrate what it looks like when a major state change happens in the game. Give us a basic section that just says, here's the kind of goal, how you play the game. Here's how you actually play the game. Here's some examples of how various things within the game happen. There are general guidelines in the [INAUDIBLE] materials [INAUDIBLE] readings we've given you. Really pay attention to that for assignment three. Basically, in each assignment we get a little bit tougher when we're grading the rules section. PROFESSOR: Because you've had two rounds of feedback already on rules. And hopefully, the third time around, you know what-- you get the idea of what we're looking for. Also, use illustrations in your rules to get your point across. You know, that's something that we typically find under-used-- you spend a lot of words trying to explain some things which [INAUDIBLE] far easier to understand with [INAUDIBLE] a simple, [? minor ?] drawing. Even just a cell phone photo of a sketch would probably have been more informative than paragraphs and paragraphs. RICK: Actually, I have a [INAUDIBLE] working for me on a board game. And she did something amazing that I've actually never thought about doing for rules. She made it basically in PowerPoint. It's really, really long, a lot of pages. So I'm really not sure how well it's going to work that way when we're going to be testing with it. But it's very illustrative and very diagrammy-- lots of big pictures, very little written out. Very little written out, like, no long paragraphs of language. I'm going to get the latest version of that and share it with the class so you can kind of see what that looks like, and whether that's useful for your game or not.
PROFESSOR: I personally have had a lot of luck with Google Docs' built-in drawing tools. So, for the word processing doc, you just bring up the drawing tool [? in it. ?] And just, like, drawing things like cards that overlap each other with text in them-- Google Docs has all the tools you need for that. Drawing arrows that arc in curves, with one arrowhead and two arrowheads-- everything that you need there is already [INAUDIBLE]. So if you don't have a drawing program, there's a lot built into Microsoft Word and into Google Docs that you can take advantage of. They don't have to look great, but they can often get the point across a lot more easily than text. So, do try to use those. STUDENT: The next playtest is March 7. Is that right? PROFESSOR: I believe that's right. I don't-- STUDENT: May 7. [INAUDIBLE] PROFESSOR: May. STUDENT: [INAUDIBLE] RICK: So, I highly, highly recommend bringing a draft of your rules in and having that as part of the playtest. You'll get feedback from [? Phil and myself. ?] You'll also get feedback from our guest lecturers. PROFESSOR: Every time somebody asks you for rule verification in class, or when you're playtesting with your dorm mate or whatever, that's a good opportunity for you to think of how you could reword your rules or reformat your rules. That is clearly a situation where this could be a question that's going to come up again when we play the game. Or it could be a situation where they just didn't understand the rules as written right now. So you've got to rephrase it somehow. Use bullet points and examples, and make sure they're visually distinct when someone is [INAUDIBLE]. RICK: A game designer I follow on Twitter, a very, very well spoken [INAUDIBLE], just posted: there's a difference between a broken rule and a broken rules presentation. The problem could be in either [INAUDIBLE] both of those places. Without testing, you're just not going to find it.
PROFESSOR: If the only way that people are playing your game is that you are explaining it to them, then you're not getting the feedback that you need on your rules. So, make sure that for assignment three, your rules have been tested too. The rulesheet has been tested as well as [INAUDIBLE]. Also, one tip for people who are doing personal write ups-- people are getting better at identifying things that went well and things that didn't go so well in the project. And that's good-- in general, if you did a decent write up, you probably got an extra half grade on top of your grade. So if you got like a B minus-- if you got a B plus, your write up alone could push you to an A minus, for instance. But what some people don't [? necessarily ?] do is just say, next time, I'm going to do this, or next time, I'm going to be sure not to do this. Just simple little things like, what are you going to take from all of this for future projects? So right now, [? we are seeing ?] a good analysis of your previous project, but usually write ups will also [INAUDIBLE] what the take-away is. This is what I want to make sure I'm going to try to do. Or maybe you're not sure, but this is something that you're going to try to do. The next time you make a game, this is what you're going to try to do. That can help a lot. RICK: First time is free. If it's the last game you'll ever make, then what are you going to [INAUDIBLE] for a project? What are you going to do on another team project, another design project? Think about the future. PROFESSOR: It doesn't have to be a design thing. It can be, maybe next time, if I work on a team on a creative project, I am going to make sure that I communicate with them this way, or make sure that they understand blah. Just something like that-- some sort of lesson that you're taking away from the previous assignment [INAUDIBLE]. Any questions about the assignment, assignment two? Grades?
By the way, did you get an email or something from [? Stella ?] that tells you grades have been posted? STUDENT: No. RICK: Because I think with other classes, [? they post ?] more grades, like p-sets and other stuff. So it would be kind of annoying. Like, four classes posting a p-set every week, or once a week. PROFESSOR: That makes sense. OK, well, OK, then. It's a good thing that we haven't [INAUDIBLE]. Today's reading is mostly about [INAUDIBLE]. It's actually a bit of a post-mortem. It's written in sort of the same style as the flavor text of [INAUDIBLE]. And I thought it might help to actually open up the box, give you an idea of what it looks like inside. It might [INAUDIBLE] write up for-- actually, [INAUDIBLE] RICK: I've seen this. I've watched people play this. This is the latest edition. You might have seen the [INAUDIBLE] edition. PROFESSOR: And [INAUDIBLE] before that. [INAUDIBLE] bigger than [INAUDIBLE]. RICK: There's the special power that [INAUDIBLE]. PROFESSOR: Ah. [INAUDIBLE] So you get a bunch of planets, different colors, and I believe the colors correspond to different players. [INAUDIBLE] Got your tokens, some of which are used, some of which are not used depending on the assortment of characters that you've got in your game. [INAUDIBLE] particular [INAUDIBLE]. Or you may not use that either. This is the counter. You can see you can put [INAUDIBLE] a race. [INAUDIBLE] STUDENT: [INAUDIBLE] PROFESSOR: Yeah, so you can track them. And [INAUDIBLE] how each player keeps track of [INAUDIBLE]. So, here's the yellow player. Here's the green player. And say both of them are at three. So [INAUDIBLE] that. [INAUDIBLE] not able to [INAUDIBLE] but it's just a way to keep track of [INAUDIBLE] for any one [INAUDIBLE]. And a whole tight pile of cards of two different sizes. It's a really bad [INAUDIBLE] Some of these cards, I'm not entirely sure what they do. But the cards that I do know what they do are the cards [INAUDIBLE], [? Oh my god, ?] [? it's a mess. ?]
[INAUDIBLE] separate [INAUDIBLE] cards. [INAUDIBLE] cards which have these numbers. And what you're basically doing is using these cards to [INAUDIBLE] identify who is going to initiate [INAUDIBLE]. Here we go. Yeah, so you add-- so these numbers, depending on how many cards are played, basically give your ships some sort of attack power. But then you also have individual ships that [INAUDIBLE] move on to attack [INAUDIBLE] attacking a red planet. And what I'll do is I'll point this arrow toward the red planet. Anybody defending the red planet puts chips on the red planet. Anybody who's attacking it puts it on this [INAUDIBLE] very nice point [INAUDIBLE] are attacking. That whole game largely comes down to you trying to encourage other people to contribute either to your defense [? or the ?] attack. Please give me some chips. Please give [INAUDIBLE] and [INAUDIBLE] negotiate [INAUDIBLE] mutual benefit [INAUDIBLE]. But this is the really interesting part of the game, which is when you start a game, you get one of these cards. And this tells you what race you are. And each race basically has a power that lets them break the rules somewhere in the rules. Only you have the ability to break the rules with this power. So for instance, this is the [? hate ?] race. And they have the power of rage. At the start of a turn, you use this power to force every player to either discard a card [? on the ?] ship [INAUDIBLE] discard [INAUDIBLE]. Everybody else must then discard at the same time. So if I put [? down a ?] [? card, ?] [INAUDIBLE] an attack card, everybody has to discard an attack card. All these [INAUDIBLE] ships [INAUDIBLE] they make people angry with this power. The filch is where you steal cards from other people; the power to hack, you can [? extract ?] compensation from other players, basically. And so you've got this kind of weird, almost CCG, Collectible Card Game, kind of a [INAUDIBLE] where depending on what you start with [? is one way to ?]
determine the shape of the rest of the game board. And every time you play the game, you can have a different set of cards, because each player gets a different card-- it's very, very unlikely that you'll play two games with even the same [? set of races ?] at the table. [INAUDIBLE] And the whole idea of the games we picked out today is that all of them are kind of an interesting case of what it means to have changing rules in games. Risk Legacy-- how many of you have played this particular one? OK, how long have you had a game going [INAUDIBLE] how many rounds? STUDENT: I played with people in my home. I played in like three games and I think we played a total of 12 or so-- 12 games. PROFESSOR: Every time you play this game, the game changes. STUDENT: [INAUDIBLE] PROFESSOR: What? STUDENT: I've heard of this. PROFESSOR: Yeah, [INAUDIBLE] changes made [INAUDIBLE]. When you first buy the game, four packets, four envelopes are actually [INAUDIBLE] the past [? year, ?] You can see we played this a bit [INAUDIBLE]. And that gives you new cards, stickers that you can stick on the old cards. You can see this card had a whole bunch of stickers added onto it that basically introduce new rules into the game. You can change the map. And usually, this comes down to-- the people who won, the people who lost, get certain abilities to make changes [INAUDIBLE] areas [INAUDIBLE] these [INAUDIBLE]. Certain cities have been established. So for instance, this is the kingdom of [INAUDIBLE]. This is the kingdom of [INAUDIBLE]. STUDENT: That's mine, by the way. PROFESSOR: That's yours? STUDENT: Yeah. PROFESSOR: OK. STUDENT: That's mine. PROFESSOR: [INAUDIBLE] cover here. [INAUDIBLE] Here [INAUDIBLE]. STUDENT: Yeah. PROFESSOR: [INAUDIBLE] in Japan. So this is interesting because one way to think about it is-- it's not the [INAUDIBLE] played the game of Risk? RICK: Yeah, it starts off like Risk but closer to like Risk 20-something or other. PROFESSOR: Yeah.
RICK: One of the later editions of Risk, where there's like turn order cards and some other-- it does resources differently than classic Risk did. But basically, it is Risk. PROFESSOR: Yes, and you have stickers that you can attach on the [INAUDIBLE]. And so basically, every time you play this game, you're playing a different game [INAUDIBLE]. RICK: The great thing is, in this one, there's a story that goes along. As you do things, the story changes. So you're in this post-apocalyptic hellscape with these factions that are fighting each other. And then all of a sudden, when a certain condition is reached, one of those [? envelopes ?] opens up and new elements of the story are introduced. And then a new race-- a new faction comes up or a new race comes up. So, spoilers-- there are aliens right there from alien island. PROFESSOR: [INAUDIBLE] RICK: [INAUDIBLE] becomes the alien collaborators. And that faction from now on is considered the collaborator. It gets special powers because of the aliens, but they don't get-- I think they get special powers and I think they get a special something that goes against them. PROFESSOR: This game is actually probably not best played cold. So what I will suggest is that folks take a look at this box. Because just trying to internalize all of the changes that have happened to this game since [INAUDIBLE] probably [INAUDIBLE]. But it's a very, very [? unique ?] way to take a look and see what changes have been made. One way that a designer is going to [INAUDIBLE]. A lot of the rules that you get, a lot of the changes you can make later in the game, are actually tools for you to fix [? the events. ?] So there are really, really powerful changes that you can make in the rules. But the whole idea is that after you've played something like 10 or 15 rounds of the game, you kind of know where the game is going, where the board is [INAUDIBLE] or maybe [INAUDIBLE] how [INAUDIBLE].
And the players now have the tools to officially fix those imbalances in play. So in a way, it's [INAUDIBLE] game design process. And [INAUDIBLE] you have [INAUDIBLE] is kind of like a deck building game, only with dice and [INAUDIBLE] part of [INAUDIBLE]. And I don't have a whole lot of information about this game. STUDENT: Plays like Dominion. PROFESSOR: Yes. STUDENT: [INAUDIBLE] the cards are out. You build your deck just like it's Dominion. And when you do attacks and certain things, you have cards that represent a thing that you can use, and then dice that reflect the [INAUDIBLE] card. PROFESSOR: So, like a lot of other deck building games, you know, [INAUDIBLE] conquer. Often, when you're actually going into a game, [? what you're seeing ?] is not quite what [INAUDIBLE] before in previous games. So every time, you're trying to figure out, what is [INAUDIBLE] momentum? What is the strategy for this particular round that I'm playing? [INAUDIBLE] just happens to be an interesting twist on that [INAUDIBLE] cards. And Settlers of Catan-- it seems like an odd thing to bring up. But back when it first came out, it was kind of neat, because it was one of the games that definitely popularized the whole idea that every time you played this game, you had [INAUDIBLE] because you have the hexes that are arranged [? by the water ?] and systems to be able to make a random map [? into a ?] table no matter how it's arranged. And you're playing a different scenario in the game. So we also had the Seafarers expansion. That's kind of like the traditional way that publishers [INAUDIBLE], right? They give you all the parts and maybe a new set of rules to be able to add to an existing set of pieces on the board. What happens is that the map is a lot larger, as you can see, because you have additional cards. Actually, I don't know anything [INAUDIBLE]. But [INAUDIBLE] larger. Users [INAUDIBLE] than the old ones.
Gives you a few new pieces, new rules, and you're playing [INAUDIBLE]. So this one might also be-- and I [? wouldn't ?] suggest just playing a straight-out game of Settlers of Catan, because I know a lot of you already have. But in case you need it, [INAUDIBLE] especially if you haven't played the Seafarers expansion, [INAUDIBLE] look through how they present the rules-- knowing how to play this game, how do they introduce new sets of rules to change the way [INAUDIBLE] played? RICK: Big difference for that one: with islands, rather than having roads and settlements, you have shipping lanes that you build that connect ports. PROFESSOR: Yep, and I believe you don't know what's on the island before you get there. RICK: Yep. PROFESSOR: From the reading-- did anything strike anyone from the reading? Just wondering if-- STUDENT: It said that, I guess, the imbalance of the game was the actual point. PROFESSOR: Yeah. STUDENT: And that they were going for the feel that, if you find that something is not fair, you have your own unfair power to sort of counter that unfairness. PROFESSOR: Right, everyone has a way to cheat, basically, but it's officially mandated by the game, and the different ways you [INAUDIBLE] STUDENT: It's kind of similar to Diplomacy, which is inherently unbalanced. Like, [INAUDIBLE] played several games of Diplomacy, [INAUDIBLE] like some of the-- I don't remember which countries, but some of them are, like, in much stronger positions, and two of them together can take out the entire board if people let them. And so in general, more advanced and experienced players will not let that happen, or will all gang up and crush the first one or something like that, which kind of makes an interesting metagame. PROFESSOR: So you can spot the patterns. But then you also have a lot of tools to try to push back against that once you know that the game [INAUDIBLE].
STUDENT: I consider Cosmic Encounter to be sort of like a version of Munchkin, in that they're both sort of stop-the-leader games where it's literally everyone-- it's very easy for almost everyone to get [INAUDIBLE] to victory, and it's all about, like, how you [INAUDIBLE] people are [INAUDIBLE] everyone's trying to [? stop her. ?] And when [INAUDIBLE] people run out of stuff to stop the winner. PROFESSOR: Yep, every game comes to an end. I remember games of Cosmic Encounter dragging on for a really long amount of time, but I'm not entirely sure that's the case with this particular edition of the game. STUDENT: The idea of finding balance for [INAUDIBLE] where sort of the motto is, if it's broken, break it until it's fixed. PROFESSOR: I haven't heard that one. STUDENT: [INAUDIBLE] basically, they took Super Smash Bros Brawl and just way overpowered a bunch of different people's moves so that you can do ridiculous things the entire time. It actually ended up working pretty well [INAUDIBLE]. PROFESSOR: OK, yeah. League of Legends' Ultra Rapid Fire mode is kind of that too. But I think that's-- one of the things that I remember from the readings was, if you have a game where it's fun to lose, you've got something really, really good there. They were a lot more flippant about some of the other ways that you can tell you have a good game. Like, you lose the game when you have a divorce because your spouse wanted the game. But I'm not sure how that metric is useful. But some other metrics that tell you you've got something really interesting as a game: one is the game system continually surprising the people who actually designed the game. When, every once in a while, you just see something you've never seen before and you're the ones who made it, you know you've got a really interesting system there. And if people have stories to tell about their game after they're done-- I think Munchkin has that.
Cosmic Encounter definitely has that. They do try to give you the tone through the writing. Both games, Munchkin and Cosmic Encounter, these are very [INAUDIBLE] games where that's a lot of verbal humor in the text. But I feel in the dynamic of Cosmic Encounter, much like [INAUDIBLE] earlier, that's also [? create an origin ?] story. I was like, this person betrayed that person. I guess Diplomacy does that too. But you don't want to tell stories about Diplomacy because there's so many bad things associated with that whereas what happens in Cosmic Encounter is kind of hilarious usually. So-- STUDENT: [INAUDIBLE] PROFESSOR: Yeah, and Munchkin does that too. So you can tell stories about that and say, oh, I remember the time when, you know, I was this close to winning and then all of you [? stopped me. ?] And then I get this power and then I managed to pull myself out. And then this person just came from nowhere and won the whole game. STUDENT: I used to be competitive about Munchkin and I played like two or three games. And I was like, this game's a joke. Like-- PROFESSOR: It's one long running joke, yeah. STUDENT: So there's a role or a class, something like [? the Beef, ?] where you can roll a die-- actually, even better is the one where you just take people's shit all day. Like, I had a friend who like literally, every time, he was just like rolled-- like, keep going downhill until he was level one and just keep taking people's shit and it's just like, OK, man. You clearly don't want to win. All you're doing is trolling me right now. PROFESSOR: Right, right. It's supposed to be a situation of what a role playing group actually does when they're getting together, not how a role playing game actually works. OK, so I think [INAUDIBLE] with playing these games now, I'm going to guess about an hour. Again, I would not suggest this with Risk Legacy for playing, but you are definitely welcome to take a look at it. The rest of the games, go right ahead. 
And probably, [INAUDIBLE] hopefully, the [INAUDIBLE] at least one hour, possibly 1 and 1/2 hours for you to work in your teams for the second half of class. Cool? Yeah. STUDENT: Are we [INAUDIBLE] then? PROFESSOR: Yes, [INAUDIBLE] on Monday. Sorry, did I say Wednesday? STUDENT: I don't remember. PROFESSOR: OK, all right. Yeah, Monday-- Monday is the day where we go outside and get ready to play Joust, which means wear comfortable shoes and-- STUDENT: It's [INAUDIBLE]. PROFESSOR: And hopefully it'll get warmer. [INAUDIBLE] because outdoors is usually too noisy [INAUDIBLE]. All right, so [INAUDIBLE]
MIT_CMS608_Game_Design_Spring_2014
16_The_Simulation_Gap_Assignment_3_Pitches.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, so the plan today is that we're going to talk a little bit about the reading, which should feed directly into your thinking for the final assignment in this class. We're going to play a couple of the games. There should be just enough games for everyone to get through one round. And the rest of the class today will be used to form teams, and then talk in your teams and try to figure out what project you're doing. Actually, the other way around. We're going to talk about projects, and then you're going to form teams around them. So today's reading was-- in the title Simulation 101. Right, literally 101. It was very, very introductory. Here's a bunch of terminology and a bunch of concepts about simulation. Has anyone read that before, any other class? Because I know it comes up in some other classes. OK, so we have some sort of source reality. Which I am going to put question marks around because we live in a postmodern world. And some sort of source reality where we are trying to create some sort of model. And then we've got something that we think of as a simplified, abstracted version, which ought [INAUDIBLE] simulation. Anyone remember what are some of the things that simulations do that-- well, actually, what are the alternatives to a simulation that the article was talking about? How do we typically talk to people about [INAUDIBLE]? How do we usually convey an idea of, this is how reality seems to work? What are some other ways that you do that? AUDIENCE: You abstract it. PROFESSOR: You can abstract it. Abstraction is a useful word. I'm going to write it here. That's-- you can do that to a bunch of different moves.
When Jane Austen tells you how life was like in-- when was that? 18th century? AUDIENCE: 19th century. PROFESSOR: OK. How was she doing it? AUDIENCE: Metaphor. PROFESSOR: She tells a story. She uses the process of narrative. She does use some metaphors, but for the most part it's just very plain. This is what happened to these people, and everyone is upset. That was what the reading today was. All right, you have this way of telling people about how something is. One way that you do it is you could tell a story about it. You could do a painting, a visual version of it. You could create a little physical sculpture of it, if the concept is-- if that is a good way to convey a concept. You can create a simulation of it. He lumps together things that represent how the world works as a representation. It's like a picture of a pipe is a representation of a pipe. It's not the actual pipe. But it is a visual representation. And the narrative will be a sequence of events, a linear time representation of what happened. But it doesn't tell you anything about how that version of reality works. It doesn't-- it may give you a clue that you can intuit. But it's not actually trying to show you how this thing behaves. And what we're trying to do in simulation, in the broad sense you can think of simulation as a kind of a representation. But very specifically, a simulation is trying to tell you about how the system actually behaves. So, it is looking at some sort of reality. What are the kinds of things that you could be looking at? Before you actually try to make some working simulation about how reality works, what are some of the things that you want to look at? Let's be a bit more concrete about this. Your reality is student debt. OK, let's just say that is the system that we are looking at. We are looking at the world of student debt. And you are trying to create a model.
If you wanted to explain to somebody else, who doesn't really understand student debt, that these are all the things you've got to worry about. What are the things that you would try to-- that you might consider explaining to them? A high school student is thinking-- let's give this [INAUDIBLE]. The high school student who doesn't understand student debt but is thinking of [INAUDIBLE] college. What would you tell them? What are some things you want to tell them? AUDIENCE: The situation [INAUDIBLE] under which student debt accrues. What things [INAUDIBLE] PROFESSOR: Break it up a little bit more. What are some of the components of that? AUDIENCE: Situations where you would take out loans. PROFESSOR: OK, situations. Maybe cases, you know? These are examples of things that they [INAUDIBLE]. OK, what else? Eventually they're going to make games about this sort of thing. So what you want to know [INAUDIBLE]. AUDIENCE: You want to tell them how [INAUDIBLE], and how to pay it off eventually. PROFESSOR: OK, those are mechanics, right? Those are verbs. How do I do that? AUDIENCE: How to avoid having to take out loans. Getting scholarships and things instead. Trying to work around it. PROFESSOR: I would call those strategies. These-- you have a range of different strategies, all of which try to achieve the same goal. But some of them are better than others, and some of them avoid it completely. What else? AUDIENCE: I'd say that any sort of [INAUDIBLE] that usually happens. [INAUDIBLE] PROFESSOR: So probability of how often things happen. OK. AUDIENCE: Also trends. Student debt is increasing. [INAUDIBLE] A big picture of what it's looking like. PROFESSOR: OK, so that's coming from a more historical point of view, right? There were also two things that [INAUDIBLE]. Things to compare yourself against. AUDIENCE: The effects that having debt will have on the person or situation. PROFESSOR: Consequences? AUDIENCE: Yeah.
PROFESSOR: One of the things I would include would be things like variables. What are the numbers that you want to keep in your head? Easy monitoring that will tell you how the system works. The situation, cases, verbs, strategies, that's probably something-- I don't have a good word for it-- to explain how one thing affects something else. But that is like consequences. Maybe flows. Where does the money go? What's the rate of money coming in from getting the student a job? Was it getting a [INAUDIBLE]? Was it some other source of revenue? [INAUDIBLE] and loans, that sort of thing. And how does that affect how much money you have versus how much money you owe. So you have got all these things that come into the model. This other reality, you create some sort of model that you understand. This is how you understand the world as it is. In the team project, this is how your team understands the world as it operates. And then you decide that you are going to make a simulation out of it. And you can choose what are the things that you want to bring over. I'm not going to go through that part of the exercise because everyone will have a different answer. But for some people certain variables are going to be more important than other variables, depending on the kind of simulation you make. If you want to play this game-- actually, I've got to add a rule system. That's really what our assignment is about. If you want to play this game as someone who is giving [INAUDIBLE], then the worlds, and the variables, and the cases that you are thinking up are going to be different. They are going to be thinking about [INAUDIBLE] Whereas, if you are playing this game as a student, then you-- the simulation [INAUDIBLE] what it will look from the student. Then what these sort [INAUDIBLE] won't matter that much to you. You'll just care about yourself. So the variables you care about will be different. How those things affect your life are going to be different.
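The variables and flows being listed here (money coming in, payments going out, debt accruing) can be sketched as a minimal turn-based update. This is only an illustration of the idea; every name and number below is invented, not taken from the class discussion.

```python
# Minimal stock-and-flow sketch of the student-debt model discussed in class.
# All variable names and figures are invented for illustration.

def advance_turn(state, income, expenses, payment, interest_rate):
    """Advance one turn: income and expenses flow through cash,
    a payment reduces the debt, and the remainder accrues interest."""
    new_state = dict(state)
    new_state["cash"] += income - expenses - payment
    new_state["debt"] = max(0.0, (state["debt"] - payment) * (1 + interest_rate))
    return new_state

state = {"cash": 0.0, "debt": 20_000.0}
for _ in range(4):  # e.g. four turns, one per year after graduation
    state = advance_turn(state, income=30_000, expenses=25_000,
                         payment=3_000, interest_rate=0.04)
print(state)
```

A game version would expose `income` and `payment` as player decisions (the "verbs"), while `cash` and `debt` are the variables the players monitor each turn.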
Now, there is a [INAUDIBLE] to abstraction. And correct me if I'm wrong, but that's where [INAUDIBLE] put the simulation gap? AUDIENCE: Yes PROFESSOR: Yes. That's this thing where not only do you decide that there's a bunch of things in here that you are going to choose not to bring over to your simulation, or choose to bring over to your simulation. Then you can think of a way to simplify that. Maybe we're not going to-- maybe certain kinds of consequences from interest from loans or whatever normally increase on some sort of non-linear scale. But for the sake of making a simpler simulation we're going to assume that it is linear. We are going to say that every time-- every turn the amount of debt that you have increases by a fixed amount, rather than a percentage or compounding interest. Because it simplifies things, but maybe you still get certain points across. This thing down here, some people call a simulation gap. Other people sometimes place it over here. But I'm going to make the case that it has to happen. No matter what you do, you have to do this because you're never going to be able to fully simulate everything that you know, and you don't know everything about reality anyway. So this has to happen. But there is a design choice that you make over here of what you want to simulate and to what degree you want to simulate it. So, for instance, one of the games that we have today is Kolejka, which is, again, standing in queues, waiting for rationed goods to come in while living in 1980's communist Poland. You have no idea when the next truck of goods is going to show up. The reality is that there is-- there are schedules. Even in communist Poland, people actually-- there was a schedule in which objects were supposed to show up in certain stores. I believe that there's actually an action that hints at that, but all it does is that it allows you to place an object in multiple stores at will. And it's a special thing that you [INAUDIBLE].
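The simplification described here, replacing compounding interest with a fixed per-turn increase, is easy to see side by side. The principal, rate, and per-turn amount below are made-up numbers chosen only to show how the two rules diverge.

```python
# Comparing the simplified (linear) debt rule against compounding interest.
# Figures are invented; the point is how the two models drift apart over turns.

def linear_debt(principal, increase_per_turn, turns):
    """Simplified rule: debt grows by a fixed amount each turn."""
    return principal + increase_per_turn * turns

def compound_debt(principal, rate_per_turn, turns):
    """Closer-to-reality rule: debt compounds every turn."""
    return principal * (1 + rate_per_turn) ** turns

principal = 10_000
for turns in (1, 5, 10):
    print(turns,
          linear_debt(principal, 500, turns),               # +500 per turn
          round(compound_debt(principal, 0.05, turns), 2))  # 5% per turn
```

With these numbers the two rules agree on the first turn (10,500) and then drift apart; by turn 10 the linear model says 15,000 while the compounding one says about 16,289. That difference is part of the simulation gap the designer accepts for the sake of simpler rules.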
But for the sake of the abstracted version of the game, which they made, they are going to say, when you-- as someone who was living at that time, yes, there were schedules. It was possible to actually figure out when different goods were going to show up to some reasonable level of accuracy. But the majority of people who were involved in this system had no idea. And so for the most part, it was random. That's the decision that they're making, and the point they're trying to get across. Now whether that's true or not, that's something that the designers want to convey through design of the game. That's something that you can also decide. If there is a point that you want to make, a lot of the rhetorical points are going to be right here. So if you want to say-- we had a game last year which was about people cross-dressing to get into the French theater. Not all of them were cross-dressing, some of them were dressing up, some of them were dressing down for various reasons of social class and trying to get a cheaper ticket and being able to get into parts of the theater that you otherwise couldn't. It wasn't explicitly a simulation, but they were trying to get across a point. That this is the sort of thing that you can do. Everybody had a-- if you wanted to get into the theater, there were multiple ways you could get into the theater and there were different reasons for every single person. So a noblewoman might cross-dress as a man in order to be able to get a cheaper ticket, and go slumming with the people buying the cheap tickets because it is a lot more exciting down there. Somebody who is in the clergy will have to cross-dress to be able to buy a ticket as a woman because the clergy are not supposed to be going to the theater. So they are hiding themselves for other reasons, even though they are in a more visible position. And there is a lot of bartering, there is a lot of theory of [INAUDIBLE] and clothes and such. It is a pretty fun game.
I'm going to grab that and bring it over. So when we play the games today, I want you to think a little bit about the situation they are trying to respond to. The Gentlemen of the South Sandwiche Islands is a game about crossing bridges. If you have ever done any classes here at MIT about bridge-crossing theories, that game is all about that. But it is ostensibly a game about match-making, or finding a match. And you are sort of chasing, where the men are chasing the women who are also being chased by their-- AUDIENCE: Shepherds. PROFESSOR: Shepherds, yes. Really good. So that's a very, very, true-- they're giving the whole, actual dialogue between the gentlemen and the ladies there. It is just making it this whole traveling game. And you could ask yourself, what are they trying to get across? Whether intentional or not, it doesn't really matter. But the game sends a message about how the system is supposed to work that you can take away as somebody who has played the game. [INAUDIBLE], which is a game about a bubble economy. And Crunch, which is also about a bubble economy, only a lot more recent. And it is Crunch, in particular, where this [INAUDIBLE] is. It is trying to be a satirical commentary about how big banks need to be operating, whether or not that's actually how they operate is kind of beside the point. But they do have certain elements of simulation in there and then they just abstract them to make you focus on the things that they want you to focus on. So, let's see. Two of those games are two-player games, so that is only going to account for four people in this classroom. The other two go up to five players, so we can probably split everybody else into groups of about four. 12 people, so eight. That should make the games go at a pretty good clip. Crunch and The Gentlemen of the South Sandwiche Islands are pretty fast compared to the other two games, so the people who are playing those games might want to switch games and play [INAUDIBLE] after that.
It will probably be a lot faster if the people who just completed a round then turn around and explain it to the next group because then they won't have to read the rules. So let's do that for about an hour. Especially because Kolejka takes about an hour. And after that, we'll talk about class. We'll talk about the assignment. So, any questions about this? No? Make sure you talk with your teams, especially when you're looking at how the rules work, but also while you're playing the game. I talked to both the team that played Tulipmania-- not the team, but the group that played Tulipmania and the group that played Kolejka. I haven't had a chance to talk to the folks who played Crunch and The Gentlemen of the South Sandwiche Islands yet. So let's talk a little bit about-- AUDIENCE: There are cards on my body that I would like to get rid of if we're done. STUDENT: I don't want to call it. We're still in. PROFESSOR: I'm going to say the game is on pause. But I do want to talk a little bit about, not so much how this one particular game is being played out, but rather what does this game render in very high detail? And what does it sort of gloss over at a very low level of granularity? So what is very, very well rendered in this game? In the opinion of the people who played this game? AUDIENCE: I would say corruption. I think there's a lot of-- for example, when you are sneaking a card, you kind of wonder if someone is watching you. I think that's actually pretty similar in the real world, dealing with a corrupt CEO. You wonder if someone is going back and checking your records, if you can actually get away with taking this money and storing it somewhere. Right. STUDENT: I wouldn't quite agree, because in this game they can see the number of cards in your hand going down. And as long as they don't see you actually putting them away, they are like, you are clearly embezzling, I can't do anything about it.
PROFESSOR: So there is on one hand the not wanting to get caught in the act of embezzling. On the other hand, the very clear assumption that everybody is. You can see it, it is happening. You know that it is going on, that everybody in the game is engaging in this, but you don't. The only thing that you're worried about is just getting caught in the act. Of having your hand in the kitty. But it is OK that the kitty-- that the cookie jar seems to be emptier than normal. The game does that well. What's really sort of vague, deliberately vaguely represented in the game? AUDIENCE: All the actual bank mechanics. PROFESSOR: Can you give me an example? AUDIENCE: Like the whole investing on [INAUDIBLE] thing. It's probably a pretty good simplified version of it for the game. Which is fine, because that's not the main focus, it's not to make smart investment choices. It's to embezzle things. PROFESSOR: The odd thing about it is that the actual operation of the banks is remarkably trivial. AUDIENCE: Right PROFESSOR: Or it has been trivialized to the point of-- that's the point of this game, is not to run a good bank. AUDIENCE: I also don't know how this works in the real world, but the government bailouts in this game seem like candy, basically. You just, oh I want a government bailout. Oh, OK. Government bailout? Sure, we need a government bailout like [INAUDIBLE] in the financial crisis, and all the banks were failing. The auto manufacturers started asking the government for money. [INAUDIBLE] so Ford decided they didn't need it, but GM did. [INAUDIBLE] PROFESSOR: Again, it's a deliberate choice, right? I think what you're saying is that yeah, it is ridiculously trivial to get a government bailout. It's clearly designed, to me, to make you think that yeah, the government is going to come in and take care of this. Even though I'm in a hole of several-- $50 million dollars or something. It is like, yeah, the government will take care of it. 
So, I had a similar discussion with all of the other teams. In Tulipmania and Kolejka, what are the things that are really in high resolution? Everything that you can possibly do in standing in line is-- it is what that kind of game is about. Whereas things like money, for instance, or, oddly enough, the way the black market played out in Kolejka was less in the mind of the group that was playing because everybody had stuff to do. We didn't really have to stand in line but that might change with a smaller number of people. And Tulipmania, what did we talk about, the stuff that was really high resolution? AUDIENCE: Trying to encourage the buyers, or use the buyers to [INAUDIBLE] PROFESSOR: Right. So, getting other people, who have money. Why even play the game? These are not [INAUDIBLE] to put money down on the investment that you are telling them to make. It is really what the whole game is about. There is a lot of detail about how that works. It is all about speculative activity. Whereas things like the actual prices, and where the bubble decides to burst, are simplified to the point where the players can make a decision about it in a way that maybe real people can't make a decision. OK, so hopefully that gives you ideas about the way-- how you can play around with the level of fidelity in your game. And now we're already down to the last half hour in class, so let's talk a little about your assignment 3. First of all, has anyone been thinking about this? I know you guys want to get back to your game. Has anyone got an idea about assignment 3 that you want to pitch to your teams-- to the class now, to be able to form a team now? It's probably a good time to do it. It doesn't need to be fully fleshed out. AUDIENCE: I have an idea for a board game based around the first triumvirate, where the players are on either [INAUDIBLE] And-- PROFESSOR: The Roman triumvirate.
AUDIENCE: And basically, all the players just want to march on Rome and become dictator there. But there's other players in the way. And also the Roman establishment, the senate doesn't want a dictator. And so you are trying to campaign to acquire more soldiers, and more influence around so that you can overcome them. You are also pushing your luck and taking riskier plays, so you can do this back to the other players. And, eventually, someone gets far ahead of them. Or someone or something doesn't pay off and they get killed. And then the whole thing [INAUDIBLE] and the players start fighting each other. PROFESSOR: So it's kind of tenuous cooperation. AUDIENCE: Yes PROFESSOR: So it's an interesting-- certainly a lot, a wealth of material to look at, when it comes to the fall of the Roman republic. Basically, if anyone points to people who-- the individual people caused the fall of the republic [INAUDIBLE]. Obviously, [INAUDIBLE]. That might be, if you are interested. Or if you don't actually know that much about ancient Romans and would like to, that is probably a good project to get into. I can't think of any better way to learn about a topic than to try to make a game about it. Anybody else? AUDIENCE: So this isn't a complete idea at all, and I'm probably coming about it from the wrong direction, but I would be very interested in finding some historical event which-- or some role where the person consistently felt that things were out of their control, and kept getting pushed in the same direction. And then make it so that the game always does the same-- it always ends in the same way. So it is a deterministic game, apart from how you get there. So, make it so that, for example, there's two players. One of them is guaranteed to be the winner. So make it actually follow history, and then don't tell the people this. Just have a game where eventually people realise that this is a game where, hey, this turned out remarkably like actual history.
And then they play it again, and it is still remarkably like actual history. And then they eventually realise that you are playing through the motions. PROFESSOR: You could certainly make a game where the governing forces are very powerful, so it's going to be playing out in pretty much the same way each time. But then, the question is, what is the agency that you have in there? What does [INAUDIBLE] take to be able to influence things? So even a game like [INAUDIBLE] is not a good example of this, but it's a game that's based on the events that are actually in a deck of cards. And so what you do to play out, these are things that actually happened in real history and they may not be in the right order, but they will always happen. So it ends up being what kinds of decisions you are to make in all of those circumstances. [INAUDIBLE] slightly different parameters. AUDIENCE: Most operational-level war games do exactly that. The game is not about getting the Germans to win or winning Vietnam, it's more about what are the various strategies you have to use at the time. But more than likely, though, the game is set to end the way history played out. So they're really complicated games. There's some less complicated ones, so if you end up going down that route, I might have some card-based versions of the game to show. PROFESSOR: You might also want to zoom in into the very-- instead of a wide swath of history, go really small. AUDIENCE: Most board games about battles will allow you to have a battle a different way. But anything operational-level tends to be very much about you just exploring what is going to be in that. PROFESSOR: That's true. So you've had your hand up forever, and then we'll go back this way. AUDIENCE: I have a separate idea, but just for that idea, maybe the Russian Revolution. Everyone was just caught up in all this craziness.
It's like-- the game is basically forcing you-- your revolution is going to lose and another one is going to come in. But along the way you are trying to set up your perfect table. PROFESSOR: You could imagine a game where there's already this tidal wave of change happening. You don't stop the wave, but how well can you ride that wave. AUDIENCE: But an idea I was thinking of was the dot-com bubble, back in the late '90s, where you would be some startup with a ridiculous business plan who is trying to acquire as many users as you can in a short amount of time. Get an IPO, get a lot of money from VCs, and then get out before everything crashes. So, I guess, it's about a bubble like Tulipmania is, but a little different in terms of the mechanics. How do you deal with the bubble? TEACHING ASSISTANT: For that one, think about the people who are going through that. What did they actually know about the bubble? Did they know that the bubble was going to happen or not? At least for this assignment, it's about being in the perspective of the person at that time, in that real time. What about it is hindsight versus what they thought at the time. PROFESSOR: Yeah, it's easy to make a game-- we have had games in this class that were also about the dot-com bubble. A lot of it was making fun of how ridiculous it looks like now, in hindsight. But it's good to try to think, what information did they have at the time? It looks like-- did this look like it was going to go on forever? AUDIENCE: I have three different ideas. Most of what I think is I want to make a two-player game, where one person is in a position of power and the other person is working for them and otherwise subversive to them. And the person in the position of power has the ability to [INAUDIBLE] the other person. And what the other person can do is-- should they try to overthrow them? Does this-- How does this describe their life? Another idea I had was a love affair thing, a three-player game.
I'm not sure that a three-player with 12 pairs. PROFESSOR: Maybe you can identify a specific three people AUDIENCE: Yes, maybe like Yoko and all the Beatles or something PROFESSOR: Yeah, that would go. AUDIENCE: And then another one, [INAUDIBLE] A game about feminist actions, on how they're all not quite the same thing but they are arguably minor points. [INAUDIBLE] PROFESSOR: [INAUDIBLE] by in time, because will be [INAUDIBLE] AUDIENCE: Cool, so I have two vague ideas. The first one, I am really interested in technology that changes the way people live their daily lives. So, when you look at 20th century, we had personal computers, so it could be like you're trying to hack things together in garages in Silicon Valley in the 1970's. Another idea is being Henry Ford, and building cars or bringing cars to the masses. Maybe you are a plant overseer and you are a line or something. Another one could be you are the Wright Brothers, inventing airplanes. It could be an experimental game. Another idea I had was that you are in the 1500's or the 1600's, and you are a merchant. You've got your ship, and you need to go buy stuff and bring it home and sell it. But the complication is there is-- you know how those old-time maps had sea monsters all over them, because the ships would disappear? But to a sea monster that ship is really good, am I right? That was the only rational explanation. I'm thinking that, [INAUDIBLE] What's that? When you go off the Earth. PROFESSOR: This is after 1492. AUDIENCE: What's that? PROFESSOR: This is after the circumnavigation, and the world is actually round. But they could be [INAUDIBLE] AUDIENCE: [INAUDIBLE] Here's your home, here's the place where all the resources are, and you have to get the resources back. But there is a sea monster in the middle. It's a path-building game, where there is a random aspect of the sea monster and if you [INAUDIBLE], you're dead. But if you can survive, you will profit. PROFESSOR: So you're a ship captain. 
AUDIENCE: Yeah, you are a merchant ship captain. PROFESSOR: Everyone had a chance to say, who had their hand up? AUDIENCE: [INAUDIBLE] you're getting data on which sensor is appearing, and you're trying to figure out where the monsters are on the map. So you're getting this information that-- [INAUDIBLE] really make it obvious that that's not actually what's going on by rolling dice or something. Or doing something very clearly random for whether-- which ships out of a particular set come back and which don't, based on where they were going. And then somehow you are supposed to make your case that, this is exactly where the monsters are. My map matches the data, and these other players' maps don't. PROFESSOR: You want to be looking at cartography technology that was out there, or what sea-going timepieces looked like at that time. Navigation, but also how did the sharing of information run at that time. Was it all just the Royal Society of blah, or was it more entrepreneurial? TEACHING ASSISTANT: So for that one, possible primary sources if you wanted to keep the fantastical element, but still make it a little bit more realistic of the time: Umberto Eco wrote a book, Baudolino, about the mystery of Prester John, the Prester John who went from Europe to bring Christianity to the Orient. But it is a book about the Orient. It is a book about map making, cartography, that kind of knowledge sharing. And the idea that if you just say something existed, it existed for real. Enough people believed it, the people in power believed it, so that it might as well have been real. And that's the kind of-- that's the weird kind of world they were working in the 13th and 14th century in Europe. It was that you had these itinerants come around and say, yes, I met Prester John, here's this map, and by the way, here's a relic of Jesus, or here is a relic of some other saint. And just by saying that made it-- gave those objects power, gave the person power.
At least for a little while within that little burg that he was traveling through. AUDIENCE: I was also going to suggest a cartography kind of game, or an explorer trying to map out something. PROFESSOR: So map making before modern technology is scary seeing that one of two things could proceed. Anybody else have ideas? It doesn't necessarily need to be game mechanic ideas, it could be just time period and person ideas. AUDIENCE: So I don't know if I want to do this or not, because it is really controversial. I'm from North Dakota and I have seen a lot of the Native Americans struggle and I've read a lot of their history. So I was thinking about making a satire, like one person is the Native Americans and one person is the United States. And just to illustrate the crazy injustices. But that is kind of heavy, and I don't know if I want to explore it. But there are ridiculous things that happen. PROFESSOR: Well it's not that far off from one of the ideas you had about two very different levels of power, right? And that also goes to very different levels of access to information. So, maybe you should form a group and see if those ideas work out. AUDIENCE: Not really fleshed out, but I was thinking about looking at a mass murder in history. PROFESSOR: OK AUDIENCE: Just any one of them. Trying to make a game out of it. PROFESSOR: OK, so mass murders on the scale of armed conflict-- armed public conflict, and mass murders of the serial-killer sort, where it is a mystery. Do you have a preference? AUDIENCE: I'm thinking serial killers PROFESSOR: All right, there are a couple of famous ones. What sort of roles? Like the investigation part of it, or the try to get away with it kind of thing? AUDIENCE: I think the trying to get away with it thing. All right? TEACHING ASSISTANT: I think you might get visited by the FBI. AUDIENCE: Have people watched the Meet the Team videos for Team Fortress 2? STUDENT: So one of these videos is about the pyro, he's one of the characters.
And in that video it shows the Pyro's worldview as Candy Land, basically. And this is a guy who goes around and burns everything. And I'm thinking something-- just in response to what you are saying about a mass murderer. If you can find a mass murderer who is historically known to be crazy [INAUDIBLE]. And then you make-- you could have two different boards, the way that there were for the [? feed ?] game, with the last project. And you could have one of the boards be very clearly fantasy land. This guy is going after something, he's trying to find something. I don't know, trying to kill women or something. So he's going around slaughtering them, and on the other map it is the real view of what has happened there. And then you can have whoever is trying to stop him see the completely different view of the situation. AUDIENCE: So similarly related to that idea, a good example [INAUDIBLE] but it [INAUDIBLE] There is a book called The Devil in the White City. At the Chicago World's Fair, they were building this beautiful city; they wanted to show off to the world what America could do. There was all this modernization, and showing off the electric lights, and how exciting that was. They built this beautiful, huge white building. They put up the whole thing. Then, in the shadow of this, there was a serial killer who was exploiting the fact that there were migrant workers there all the time trying to build this. Which helped obfuscate his very criminal activities. TEACHING ASSISTANT: That's a great [INAUDIBLE]. It also keeps in the spirit of the assignment, too. Just thinking about how you've got this new situation, you've got these new systems involved, and how people are exploiting the system in various ways. For anything [INAUDIBLE] I would be really, really cautious about. And even with the Native American aspect, or any kind of system of oppression, be really cautious about representing the person being oppressed.
If you're putting it in the point of view of the oppressor, just be really, really careful about that. Actually, I personally think it's more interesting to be in the point of view of the person being oppressed, and how they're responding to that situation. PROFESSOR: There's a group [INAUDIBLE] who has done a couple of games on systemic oppression. If you look at any historical case of-- a period of human oppression, you can always find a system underneath it. That system actually enables people to be more easily inhumane to each other. As game designers, systems are a very practical tool, so look at that. But just be aware that you're exploiting the history of people who were oppressed, by the way, by having their culture taken away from them. Try to do it with some respect. Try to bring some empathy into it. This is what it was like. And the limits that they had on the kinds of decisions that they could make, for instance. AUDIENCE: A lot of historical games are very asymmetrical. Are the ones that are [INAUDIBLE] I don't know of any simple historical games [INAUDIBLE] All those historical games that are good, and simple, don't have these imbalances in power between the players, because that requires very complicated rules to make sure that it works. PROFESSOR: It is a huge design challenge, for sure. It is not going to be easy. It's-- AUDIENCE: Is The Apprentice, the Irish game, asymmetric? PROFESSOR: Yeah, that is an example. That's an interesting game because-- so there is a game designer, [INAUDIBLE] Romero. She's the designer of many, many famous computer games, including the Wizardry series all the way back in the '80s. But what she's more often known for nowadays is a series of games that she designed for museums on various systems of human oppression. She did a game about the Trail of Tears. She did Train. She did a game about the English occupation of Ireland. The way that game deals with it is that all the players are symmetric. All players are different Irish armies.
But the system is playing against you. The system is basically this mechanical influx of the English taking more and more Irish land and cramming all of the Irish into smaller lumps of land. And people get pushed off the island-- forced emigration, basically. But that's another way to approach it, right? It's asymmetric, but in a way it's easier to design, because it's the game system versus the players. And where that game actually gets its drama is that it forces the players into conflict with each other, because there are fewer and fewer resources for more and more people in a smaller space. AUDIENCE: I'm trying to think. Yeah, you're right. All the card-based two-player games that GMT puts out, like Twilight Struggle, they have a lot of complex-- there's a lot of complex things going on underneath, and a lot of it is buried in cards. PROFESSOR: Well, I can think of games that are highly asymmetric but not incredibly complicated, and a lot of it [INAUDIBLE]. For instance, which is not historically based, but whatever: it is about alien races, and each alien race basically has a rule they can break, which works in a very interesting way. But everybody starts with the same core rules, plus this one thing that you get to break. Tammany Hall, which we didn't get a chance to play today, is another one where, every round that you play in the game, there is a particular rule that you can break because you are occupying a position in city government. You can think of that rule break as a special ability that you get to execute. That is another way to be able to think about it. So, we've already had The [INAUDIBLE], which was really more like two different games that are being played simultaneously. So if any of you were on that team, you already have a perspective on what it's like to design a game with asymmetric information, and asymmetric abilities, and different goals. It has been done in this class before. It is a design challenge, but you can do it.
TEACHING ASSISTANT: The original Netrunner is still a little complicated in that there are a ton of different cards that are carrying a lot of that stuff. But [INAUDIBLE] it is closer to what you do in this class. PROFESSOR: It's not like you've got a game that has different goals you already enjoy using [INAUDIBLE] AUDIENCE: [INAUDIBLE] Cosmic Encounter, but it's not balanced at all. PROFESSOR: No, it's not. It doesn't need to be. That's not a requirement of this class. The requirement is something playable and engaging. I never asked for balancing. First of all, the time frames that we're giving you to completely finish these games are a little bit unrealistic for balance. TEACHING ASSISTANT: Iteration. PROFESSOR: Usability-- TEACHING ASSISTANT: Iteration gets more balanced as time passes. PROFESSOR: Yes. TEACHING ASSISTANT: But at the end, we still don't-- we're not really grading on balance as much as we're grading on iteration and-- PROFESSOR: I think usability is something that we are going to look at. Whether I understand your rules is more important than how well balanced your rules are. If your rules completely break, that's a problem. But if your numbers are off, sorry. A few percentage points, or a couple of [INAUDIBLE], or a couple of [INAUDIBLE] even. But if I don't understand how your rules work, that means a lot more to me. AUDIENCE: Just throwing an idea out there. The role of the-- one of the people who really founded the board game industry. PROFESSOR: Do you mean the Parker Brothers? AUDIENCE: So from the perspective of a game designer, or someone who is really trying to make this business a business. PROFESSOR: Or you could be something like a Hasbro, that's just trying to beat other people at the business. Or you can be something like the independent Richie Branson, trying to [INAUDIBLE]. AUDIENCE: I figure that will be closer to people's hearts, too. PROFESSOR: You have to do a little research, because we obviously haven't covered a lot of them.
But it would be interesting research. That would fit in nicely with the time period that John was talking about, the 20th century, the mass productization of America. But you probably do need to limit that to one country's industry. America is probably easy to do. It's a bit difficult to look at the whole world. [INAUDIBLE] The rest of class today-- I know this group wants to get back to the game, you can do it. But there's time for you to talk through your ideas. TEACHING ASSISTANT: Feel free to email your ideas to the game design mailing list. If you need to find other people, we'll do another round. We will do this again at the end of the day Wednesday, right? PROFESSOR: Wednesday. TEACHING ASSISTANT: And then, teams should be formed by Monday, because on Wednesday the 16th, that's the first presentation. PROFESSOR: Yeah, the pitch. TEACHING ASSISTANT: The pitch. So as a reminder about that, we want to know about what primary and secondary sources you're using as your inspiration, and what kind of game you think you might be making. Nothing set in stone; it's much more about, give us a pitch of why this game idea is interesting, and what sources of information you're using to make sure that the game is realistic. PROFESSOR: But let me make sure that we get this right. April 14th-- no, April 16th is when you do the pitch part. On April 14th, you should already be able to talk a little bit about, this is the idea that we want to work with, these are the people in my group, these are some of the sources of information that we're looking at, because we'll have guests, and they will be able to give you some feedback on that. Both in the pitch and with the guests, we're not actually so much grading you on the quality of your pitch. But we are going to provide you feedback on how you're pitching, so that if you have to do pitching in real life, in your career after graduation, you can get some feedback on how you present yourself.
MIT CMS.608 Game Design, Spring 2014
6: Constraints and Usability
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. AUDIENCE: Are you really wanting an Excel-style spreadsheet? PROFESSOR: A Word table is fine. I would like to see a table. AUDIENCE: It looks more like a journal, I'd say, than a table. PROFESSOR: Yeah, that's OK. It looks as if you made your table, and then you converted the table to text, or something like that, and there's this paragraph format to it. AUDIENCE: Yes. It's like the journal of our game. PROFESSOR: That's fine. As long as I get a sense of the history of the game, and the reasons why you made changes. That's the thing I'm really looking for. The table format does make it easier for me to read. In the future, I would like [INAUDIBLE] things. But because I didn't make that clear in this assignment at all, as long as I can trace the history of the game, I am good. Other questions? If anyone has a copy of the syllabus, do check it out. I do think the one-page write-up is due the following week, on Monday. This coming Monday. I'm actually not going to be here on Monday. I'll be traveling to give a talk, so Rick will be heading next week's classes, basically Monday and Wednesday. So, I'll catch you-- grading might be delayed by a little bit. AUDIENCE: [INAUDIBLE] PROFESSOR: A little [INAUDIBLE] Yeah. That one in particular, [INAUDIBLE]. For the assignment itself, for the group submissions, I would actually like to see hard copies, so I can keep it all together. The game itself obviously needs to be a hard copy. But then if I can get a printout of your change log, everything is all in one place. Oh, by the way, on Wednesday, I will come in with a whole bunch of Amazon boxes.
Old Amazon empty boxes, so that you can just put your games into an appropriately sized box. If you-- AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, after this talk I will go and grab the prototyping cases. I'll bring them down here, and you will probably have at least an hour and a half, if not two hours, to work on them. So, today's reading was from The Design of Everyday Things, also known as The Psychology of Everyday Things. You probably noticed we skipped from chapter 1 all the way to chapter 4. Part of the reason for that is chapter 4 is kind of a summary of chapters 2 and 3. Chapters 2 and 3 have tons of nice case examples. It walks you through the reasoning, explaining things in detail. Again, I will recommend this book to anybody who is doing any kind of design where a human would be expected to touch your design. And it's really, really easy to read. Hopefully you enjoyed today's reading. So what I'm going to go into today is actually a little bit more detail about the stuff that wasn't covered in chapter 4. In chapter 3 of this book, Donald Norman actually lays out his mental model of how people make decisions when they are confronted with a piece of technology or a designed object that they don't quite understand. We first saw this briefly in chapter 4, but I thought I'd go into a little bit more detail about this before getting to the stuff that's actually in chapter 4. So he's got this basic idea that every time you interact with any kind of technology, whether that's a chair you've never seen before, or a car, or a computer program, you go through this loop. You have got some goals as an individual human being. Say I've got a new microwave oven; I would like to eat something. And then you go through this loop of trying to figure out how this new microwave oven works. You have some sort of intention. The goal might be, OK, I want to eat something. So my intention is to warm up this TV dinner. I plan. OK, the plan is that I've got to prep this TV dinner somehow.
So I'm going to go through a series of steps. I guess I need to take the wrapping off. Maybe I peel back the plastic a little bit. Sometimes I get confused, too. And then you actually have to peel it. And then, instead of peeling up nicely, it peels up into little strips. Has this ever happened to you? And it comes up in your fingers. That's the execution part, right? So I know what I want to do, but I actually have to go and do it. And then I get feedback. I'm not even talking about the microwave oven, I'm talking about just the TV dinner right now. I see the feedback. The thing peeled back, but it didn't peel back the way that I wanted it to. I see it. I have to interpret whether that's getting me closer to my goal. Well, all I really want is a couple of holes for the steam to come out. So, OK, I guess that's good enough. And I evaluate that as being good enough. So I can proceed to the next step of my plan of getting dinner. So my intention now is to actually put this thing into the microwave oven as I've planned, to heat it up for 30 seconds. Which means a series of steps to actually push buttons to heat it up for 30 seconds. I have to make sure I don't set it to 30 minutes, so I punch the buttons carefully to make sure I don't hit too many zeroes. I see the feedback on the screen. I perceive whether that's what I want or not. Sometimes it's not what I want. I punched in 30 minutes by mistake. I interpret that 30:00 as 30 minutes, and I evaluate that that's not what I wanted it to do. And then I make another set of intentions to solve that problem. So this is a model. And this is actually seven steps, even though I've got eight boxes. One, two, three, four, five, six, seven. This thing here happens in the system. It happens with the thing that you are trying to interact with. You can expand it into what you think of as a computer program. This would be expanded into a huge algorithm, or the thing that the computer has to solve.
And it has to figure out what you were trying to do, and then try to communicate back to you whether that was successful or not. That's not really what I want you to worry about right now. But these seven steps are all happening in the human. And all these steps are an opportunity for something to go wrong. Let me give you an example, from games in particular. These are some of the considerations that you might have in a game. Say it's a competitive game. You want to win, you don't want to lose. So how do I win? Can I win by gathering a lot of money or resources, or getting more points, or knocking out my opponent? What are my options right now? Maybe it's a card-based game, and the options are all laid out in the cards that you're holding. Maybe you have the same three decisions every turn. So, given all those options, what do I want to do? And then you actually have to execute that. If it's some sort of dexterity game, where you have to flick tokens on a board, that's kind of tough. Maybe you just place the card face down. You need to make sure you pick the right card before placing it face down, and that you're not placing it face up by accident. The game reacts. That could involve other people. That could involve a computer. That could just involve how the board looks. And then you as a human being see the perceived state of the game, and figure out whether that got you any closer to where you wanted to be. You say, oh, I made that move. And man, that was a terrible move. And I know that because it took points away from me. So now I need to figure out what's my next step. I make a plan, and I go on, and on, and so forth. So, the idea that Donald Norman talks about, mental models, is basically this idea that the human beings who are playing your game have an idea of how your game works. And they might be right. They might be wrong. Or maybe somewhat vaguely right, but not quite there.
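The seven-stage loop walked through above can be sketched as a toy program. This is an illustrative sketch only: the Game class, the action names, and the goal are invented for the example, not part of the lecture or any real API.

```python
# A toy walkthrough of Donald Norman's seven stages of action,
# applied to one turn of a game. Every name here is illustrative.

class Game:
    """A stand-in game system: the part that reacts to execution."""
    def __init__(self):
        self.score = 0

    def apply(self, action):
        if action == "score":
            self.score += 1

    def visible_state(self):
        return {"score": self.score}

def play_turn(game):
    goal = 1                                     # 1. goal: reach a score of 1
    intention = "increase my score"              # 2. form the intention
    plan = ["score"]                             # 3. plan a sequence of actions
    for action in plan:                          # 4. execution: the move itself
        game.apply(action)
    percept = game.visible_state()               # 5. perceive the new state
    meaning = f"my score is {percept['score']}"  # 6. interpret it
    achieved = percept["score"] >= goal          # 7. evaluate against the goal
    return achieved, meaning

achieved, meaning = play_turn(Game())
print(achieved, meaning)  # True my score is 1
```

Each of the seven lines in `play_turn` is a place where the player's mental model can diverge from the system, which is the point of the lecture's examples.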
Now, as the designer of a game, you are creating the system that players are interacting with. Everything that I talked about earlier was happening between the player and the system. But you're the designer who is trying to machine the system into something that a player can understand. That should actually sound familiar, because we talked a little bit about this: mechanics, dynamics, aesthetics. You, as a designer designing a system, are creating the mechanics of the game. Players experiencing the aesthetics of the game see all kinds of resultant dynamics that are coming from both how your rules define what the player is going to do, as well as how the player decides they're going to execute your rules. So it's this weird, horrible, second-order problem, which results in you trying to communicate to the player: this is what's happening underneath the hood, and you really need to understand this to be able to make good decisions. So again, the player develops a mental model of how the system works by interacting with the system. The player tries to play a game by poking at it. And the player then uses that model, which happens to exist in the player's head, to anticipate what that system is going to do in the future if they did something differently. And that mental model gives the player an explanation for why the system is behaving in the way that it does. The designer also has a mental model. You all have an idea of how your games currently work. Whether or not your games actually work that way-- because if I read your rules, I may interpret them differently. But right now you have an idea of how your game works, because you've been working with these things for two weeks. And you are materializing that into a set of game mechanics, a bunch of rules. The player has to interpret that, and then create their own mental model out of it. So you can imagine so many different possibilities of things just going wrong. I'm going to talk a little bit about that.
So one of them is the errors that a player makes. There are two different kinds of errors that I want to talk about, and I'll try to introduce this vocabulary to you so that when you're talking with your teams you can use clear language. If you have gone through CMS.611 before, you've seen a version of this presentation. I have changed up the examples so that these are board game examples. Slips are when the player knows what's supposed to happen and what they're supposed to do. The player just happens to accidentally do the wrong thing. Real-world slips include-- has anybody tried to call somebody on the phone, dialed the number, and accidentally dialed somebody else that you know really well? It's not like you didn't know each person's phone number. You just automatically started dialing the wrong number, which you also knew very well. Driving-- if you drive, and you want to drive to your friend's house, you accidentally end up at work, because you just got into a routine. Halfway through a task you forget what you're doing. It's not like you don't actually know what your task is, you just happened to forget it in the middle of doing it. Pushing a similar button to the one that you actually wanted, that sort of thing. Dealing cards. Everybody gets five cards. How many have I got? That person has five cards, six cards, five cards, five cards, or something like that. People come up with systems to help them keep track of how many cards they gave out. But it's not like you didn't know that you were supposed to deal five cards, you just lost track. So that's a slip. That's not a problem of the player's mental model. The player understands how the game is supposed to work. Then there are also mistakes, which are errors in the player's mental model. The player thinks the game works in a certain way, and the game isn't actually supposed to work in that way.
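The card-dealing slip above can also be designed out structurally. A minimal sketch, with the deck as a plain list of numbers: dealing round-robin, one card per player per pass, means the dealer never has to hold a per-player count in their head.

```python
# A sketch of designing out the dealing slip: instead of counting out
# five cards per player (and losing track), deal round-robin, one card
# to each player per pass. The deck is just a list of numbers here.

def deal(deck, num_players, cards_each):
    """Deal cards_each cards to each player, one per player per pass,
    so no per-player count ever has to be kept in the dealer's head."""
    hands = [[] for _ in range(num_players)]
    for _ in range(cards_each):
        for hand in hands:
            hand.append(deck.pop())
    return hands

deck = list(range(52))
hands = deal(deck, 4, 5)
print([len(h) for h in hands])  # [5, 5, 5, 5]
print(len(deck))                # 32
```

This is the same trick real dealers use: the procedure itself guarantees the count, so a slip in attention can't produce a six-card hand.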
So this is a product placement shot-- a beauty shot-- of a beautifully hand-crafted, machined [INAUDIBLE] What's wrong with that? AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, you can't put cities on adjacent spots. I think you can't even put one adjacent to your opponents'. AUDIENCE: That's right. Two apart. PROFESSOR: Two cities cannot be right next to each other. It makes for a nice photo, but the person who set this up clearly never actually played the game. It's still beautiful work. But if the person was actually playing the game, this person would have the wrong idea about how this game is played. That's actually sometimes perfectly fine. That's part of learning. You make a mistake. If the game corrects you, and then you learn how to compensate for that, or play differently, that's just the process of learning the game. People make mistakes while they're learning the game all the time. If you're playing a computer game, sometimes that's all the fun of playing the game: trying to figure out how this crazy computer game works. Probing it, and then the game punishes you because that was the wrong decision. And then you learn not to do that, and then you try a different strategy. In fact, a game where you never fail-- and it's actually fairly easy to make a computer game where you never fail-- isn't that interesting for a player, because that means you can just use whatever strategy you went in with and it will always work. There's no reason for you to think that this game needs smart strategy or has any depth to it. Whereas, a game where you try out the strategy you came in with and maybe it works a little bit, and then it stops working, and then you get feedback on why it might not be working-- well, that tells you, hey, there's something underneath the hood here. Maybe I can explore this game a little bit further. There was an interesting study from Jesper Juul, a game scholar, who wrote the book The Art of Failure.
One of the tricks that-- he made this really simple kind of snake-type game where you pick things up and it just gets longer. And he asked people to just rate the game at the end of the game. The game was the same, no matter who played it. But some people played it better, some people played it worse. The people who never failed once in beating, I think, five levels of the game rated that game lower than someone who failed once or twice. Somebody who failed once or twice rated it a seven or eight. People who never failed rated it a five. People who could never get past the first level would also rate it a five or a four. So people get frustrated if they can't find the right mental model to play this game. It's like, none of my strategies work. I don't understand this game. I don't get this game. This is frustrating. I hate this game. And the alternative is, I completely get this game. I saw right through it the moment you put this in front of me. I beat it in less than five minutes. My strategy just worked. Eh, not much there. And the people who fail a little bit halfway through but then manage to complete the rest of the game because they changed their strategy? Woohoo, that's a pretty good game. It gave me a challenge and I overcame it. So let's talk a little about feedback. How do we give users this type of really, really valuable information to correct those mental models? I guess I'm skipping to the right side of the slide here. So we need to tell the player that an error has happened, and we need to give them tools to recover. Often, in a board game or in a card game, that kind of telling the player an error has happened needs to be enforced by other people. So you need to give the other players tools to detect when the game is on the verge of crashing, when someone has executed an illegal move, and then give them some sort of structure to be able to correct that. I'll give you a bit more of an example about that in a second.
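One way to make that detect-and-correct structure concrete is to write the legality check explicitly. A minimal sketch, assuming a simplified Catan-style distance rule (no two settlements on adjacent intersections); the intersection graph here is a made-up toy board, not the real Catan layout.

```python
# A sketch of catching an illegal move before it changes the game,
# using a simplified Catan-style distance rule: no two settlements
# on adjacent intersections. The toy board below is illustrative.

ADJACENT = {            # intersection -> neighboring intersections
    0: {1, 2},
    1: {0, 3},
    2: {0, 3},
    3: {1, 2},
}

def is_legal_settlement(spot, settlements):
    """Legal if the spot is free and no neighbor holds a settlement."""
    if spot in settlements:
        return False
    return not (ADJACENT[spot] & settlements)

def place_settlement(spot, settlements):
    """Apply the move only if legal; otherwise reject it, leaving the
    board untouched -- the 'referee' role from the lecture."""
    if is_legal_settlement(spot, settlements):
        settlements.add(spot)
        return True     # move accepted
    return False        # move rejected before it corrupts the game state

board = set()
print(place_settlement(0, board))  # True: first settlement can go anywhere
print(place_settlement(1, board))  # False: adjacent to 0, distance rule
print(place_settlement(3, board))  # True: not adjacent to 0
```

In a tabletop game this check lives in the other players' heads or in a referee; in a computer game, validating before applying is the standard way to keep an erroneous move from ever entering the game state.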
The alternative to that process is you just let them make the mistake. On occasion they are making a move that is an error-- that is a mistake, because they don't understand how this game is supposed to work-- but it's a legal move. And then, as a designer, you have the option of giving them tools to fix that problem. It's like, all right, you made a suboptimal move. It isn't going to get you any closer to your winning goal, but I'm going to give you the kinds of rules to be able to recover from that mistake. Of course, you could also just prevent some of those mistakes from happening, right? On the Settlers of Catan board, they actually had slots of the right size to be able to put the pieces in. That's a constraint, a typical constraint that prevents you from putting the wrong piece in the wrong place. Checks and confirmations, again, usually tend to happen with other players. Or maybe a referee. Especially in sports, you want to think about the role of the referee and what the referee should be checking for. But if you're making a computer game, then the computer is doing that for you. Let's talk about the kinds of error recovery. There's backward error recovery, which is undo, basically. I'm taking back that move. In chess, the rule is if you've got your hand still on the piece, you can still take it back. AUDIENCE: Not in tournament play. PROFESSOR: Not in tournament play? AUDIENCE: In tournament play, it's touch move. PROFESSOR: It's touch move? AUDIENCE: As long as your hand is on it. If you touch an opponent's piece, you have to capture that piece. PROFESSOR: Oh, yes. AUDIENCE: If you touch your piece, then you have to make a legal move with that. PROFESSOR: Yes, you have to make a legal move with that. But you could-- if it was a knight, it could move this way, or it could move that way. If I moved it that way, I can still move it over there if I haven't let go. AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, I do hit a clock.
Because in tournament play, especially in timed tournament play, which is how it is usually done, there's still a penalty for that. You're still losing time, even if you correct yourself. But at least there's a very clearly stated set of rules for how you take that move back, and for when you're not allowed to take that move back, which is basically when you let go of the piece. Forward error recovery is more about, again, giving people tools to be able to compensate. Mario-- this is not a board game example, obviously. You jump, and you can change your direction in mid-air. So you can start a jump that you are not going to complete, where you would fall to your death, but you can go backwards while you're flying in mid-air and then land back where you started. Again, you lose time, but otherwise you haven't lost the game. So that's forward error recovery. You've made the decision-- and that wasn't the decision you wanted-- and then you fix it. You take a new action to compensate. Now, affordances come up in Donald Norman, but I believe he goes into more detail about them in chapters 1 and 2. So we are going to go into different kinds of affordances and constraints. Affordances are aspects of the thing that you're confronted with that invite you to do something. So we talked a little bit about the affordances of materials. Glass invites breaking. Porous wood invites writing on. Door handles are one thing that comes up a lot in Donald Norman. We have a door handle that is not all that different from this on the door in this class. And that handle, it's about hand-sized, it invites you to grab it. You could turn it, you could pull it, you could push it. So there are bigger constraints when you look at a door like that door, that tell you what the function of this handle is supposed to be. First of all, there's a semantic constraint. You've been taught: that thing is a door. You've seen a million doors just like that.
And so you say, it's a door, it must open somehow. Have any of you found a random door on campus that just doesn't open? It's not locked, it just doesn't open. Or maybe it opens into a brick wall. Yeah? AUDIENCE: [INAUDIBLE] No, not here. PROFESSOR: Right. Did it just not open, or does it open into something bizarre? AUDIENCE: Into a wall. PROFESSOR: A brick wall, right? Yes, that's betraying the semantic constraint. The door is not supposed to open into a wall; it's supposed to open into another space that you can go through. It's a door, it must open somehow. And so you look at a door and see all the different ways that you can open it. Now, on emergency doors, you have these things often called crash bars. And what you see over there is pretty popular, but not the only way to implement this idea. A crash bar is designed to make it very, very clear that this door only opens in one way, to try to limit how much mental processing you need to do, because it's usually put on emergency exits. But this particular one, you could grab this. You can grab this with your hand. You could push it, you could grab it, you can pull it up, you can push it down. There is a logical constraint: until you get your hands on something like this, you really can't see the lock. You just see the door, and the door is closed. As soon as you grab this, you realize that this hinge is spring-loaded and can only go down. So there's a logical constraint there. This is a hinge that can go up and down, but it's already in the up position. So in order to activate this thing, you push it down, which encourages pushing rather than pulling. So it encourages you to exert force in the correct direction, assuming that the door opens outwards. If it opens inwards, then you've really put the door handle on the wrong side of the door. There are also physical constraints, back to the example I talked about, the Settlers of Catan board.
This is a different kind of crash bar, but this one, I think, does a slightly better job, because it is actually kind of difficult to grab. It is actually pretty wide-- much wider than most people's hands. It's really easy to push. So, just physically, it makes it difficult for you to do the wrong thing, to pull on a door that generally opens outwards. But because this doesn't tell you where it hinges, it doesn't tell you whether the door opens to the left or the door opens to the right. If you push too hard on the wrong side of the door, where the hinges are, the door doesn't really open. I guess it would be hard to open, but if you just gently push it, it won't open. Which is why you get these, which are offset to one side. And there are cultural constraints that are associated with that. Like, the blue, slightly rubberized, grippy surface encourages you to put your hands on that. There's a physical constraint: because it's off-center, if you push anywhere along this part, you won't generate enough torque to be able to open the door easily. I'm not so sure about this sticker that's over here, but I do think they kind of inform you that this is actually something where you put your hand, because this is rubber and there is ink on there. This is rubberized, and it's coated, and it's clearly hand-sized. It's inviting you to push it. So that is cultural-- the colors, the material, the rubber. That says, this is something for you to put your hands on. So there are four different kinds of constraints that I went through. There are cultural constraints, which is what I was talking about-- the blue. There are physical constraints, which are just making it hard for you to do the wrong thing, in the same way that certain pieces in certain games are easy to pick up because you are going to be picking them up a lot. Certain things are easy to roll.
Those things that are hard to pick up, you're not supposed to be picking them up-- maybe you're supposed to be sliding them around. There are logical constraints, where if you just look at it for a second, you will say, well, it can't be this, therefore it must be that. And finally, there are semantic constraints: when I say something is a door, you expect to be able to go through it. Same thing for a game. If you see a door in a game, and it is labeled door, you expect that door leads somewhere in the game. I think there is a version of Clue that I used to play when I was a very, very little kid. And if you remember the Clue board, it looked like a floor plan of a house. It has windows and stuff like that, and that implies that you can go outside of the house. But you can't really go outside the house; that would be breaking the rules. That confused me when I was five years old, because I expected that you'd be able to go through any door. A couple of other things you can do to help people understand your game. One is context: how things are placed next to other things suggests how they're supposed to be used. This is Boggle, and when you open it, you get these two pieces. Well, you actually get [INAUDIBLE]. But you get these things together, and as soon as you see this, if you know what an hourglass is, you immediately know that this is a timed game. Whether or not you read it on the box that this was a game that was timed, whether or not you've ever played this game before. When you see a timer that's packed in the same box with things, that suggests that this is a game that you play with time. So that's one way of thinking about context. I'm going to go to a few more examples later on, and I want you to think about how things being placed next to each other suggests how they're supposed to be used. Context also comes from how you name things and how you provide art.
This is not something to worry about in assignment 1, but you're going to have to worry about it in assignment 2. How many of you have played Diplomacy? Do you still play Diplomacy with the same people that you played with? AUDIENCE: No. PROFESSOR: You don't talk to them anymore? No? OK. That's the problem with Diplomacy. How good are you at playing Risk? OK, if you just look at a Diplomacy board or a Risk board, they don't look all that different. It's a world map. I think in Diplomacy it's a modern European map. But now, this game is a game of international intrigue, trust, and treachery, and it's called Diplomacy. And this game is called the game of global domination: Risk. Just look at the way they're presented. Why don't you tell me, what do these boxes tell you about these games? AUDIENCE: Risk has a bunch of cannons and [INAUDIBLE], so you're probably going to be fighting and attacking stuff. PROFESSOR: OK. Yeah, more conflict. AUDIENCE: Diplomacy, it looks like a guy is hanging around. And there's a sword and a wine glass, so maybe there's a bit of fighting, or a bit of wheeling and dealing, but it seems like mostly you'll be talking to people. PROFESSOR: OK, there'll be a lot of talking in this game. AUDIENCE: I get the sense that they might be leaders discussing a deal. PROFESSOR: There's a world map and a globe to remind you. Yes, they're making decisions on behalf of all the world. AUDIENCE: If anything, it seems like these are people behind the throne making shady deals. PROFESSOR: Ah, shadow presidents, or kings. I guess at the time that this game is set-- the world of czars and kaisers. So you're making shady deals with authority to be able to determine the fate of the world. And Risk? Stomp over everything. I'm going to take my army and I'm going to meet your army and we're going to figure out who's got a bigger army. That's the game. That is another hint, in the titles Diplomacy and Risk. Diplomacy is a game about diplomacy.
It's about talking to people and making deals, and breaking deals at opportune times. And this is why people don't often play Diplomacy with the same people anymore. It is a great friendship killer. Risk is a game about-- I'm going all in! I might win this if the dice roll in my favor, so it's about risk. The game is about risking everything on big gambles. And it's actually a pretty good example of what this game is expecting you to do. So immediately, just by looking at the box, the games are already trying to condition you to think the way you need to think in order to do well in this game. Let's see. Visibility. This comes up a million times in Donald Norman. And when you're designing your games, you need to think about how to make what's happening in your game more visible to the players who need to make decisions about that. This is a very close-up view of a game called Cosmic Encounter, which we will play later on. And it has this Risk-like element that I am going to put my army of flying saucers against your army of flying saucers. But we're going to be fighting on multiple fronts at once. And in the game, you can join forces. So I can team up with somebody to go up against a planet that's held by you. But somebody else is going to join in and try to help defend your planet. And so they have this system of stackable tokens to basically be able to very, very easily see whose pile is higher. Right now, these are all the defenders, so they are all on this little [INAUDIBLE]. There is actually this big flat spike where you stack all of the attacking ships, and you point them in the direction of the planet you are attacking. So it just makes things very, very clear: this is how big the army that's attacking is, and this is how big the army that's defending is. Even though there are many, many, many, many different forces involved. So they're making visible what the current state is.
Again, it's kind of a diplomatic game, like Diplomacy. In the middle of every combat, you have both sides asking all the players not involved in the combat to please contribute something to this-- please contribute a ship or two to this attack. And they're making deals all the time. Even games that are not really about visibility take advantage of visibility. The whole idea of Battleship is that you don't know where your opponent's ships are. But they give you all these useful tools, these red pegs and white pegs, and a whole extra grid to be able to keep track of where you bombed before, and where you were successful. AUDIENCE: Can you imagine it without the second grid? PROFESSOR: Yes, it's possible. I mean, you would probably end up drawing your own second grid pretty quickly just to keep track of where you hit before. There's other things up here, as well. [INAUDIBLE] actually that's what it's describing-- how big each ship is. You've got your own fleet to keep track of, what the possible ships are, and how long they are. This player uses white tokens to keep track of where the enemy has bombed, which is something you don't have to do in the game to do well. It is probably useful information, but you have the tools. The player has all of these tools to be able to keep track of that information. So this game has gone a long way to try to make the little bit of information that you do get over the play of the game-- the little bit of information that you reveal over the game-- as visible as possible, so that you can make the next logical decision as easily as possible. Previous user experience goes a long way in helping people understand games. And these are dice with nontraditional numbers, not the numbers you expect to find on a die. But you see a die, you kind of know what to do with it. You pick it up, you roll it, and it will give you a number. And that number is the number that you have for that turn. There's also the little dot that distinguishes the nines from the sixes.
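The external-memory idea behind Battleship's second grid can be sketched in code: every shot's outcome is recorded once, so the player never has to re-derive it. This is a minimal, hypothetical sketch (the class and method names, the 10x10 default, and the peg symbols are my assumptions, not from the lecture):

```python
class TrackingGrid:
    """Records the result of every shot so the player never re-derives it,
    like the peg grid Battleship ships in the box."""

    def __init__(self, size=10):
        self.size = size
        self.shots = {}  # (row, col) -> "hit" or "miss"

    def record(self, row, col, hit):
        self.shots[(row, col)] = "hit" if hit else "miss"

    def untried(self):
        """Cells not yet shot at -- the candidates for the next decision."""
        return [(r, c) for r in range(self.size) for c in range(self.size)
                if (r, c) not in self.shots]

    def render(self):
        """Draw the grid: X for a hit peg, o for a miss peg, . for untried."""
        symbols = {"hit": "X", "miss": "o"}
        return "\n".join(
            "".join(symbols.get(self.shots.get((r, c)), ".")
                    for c in range(self.size))
            for r in range(self.size))

grid = TrackingGrid(size=5)
grid.record(0, 0, hit=False)
grid.record(2, 3, hit=True)
print(grid.render())
```

The point of the lecture example carries over: the game state a player needs for the next decision is kept visible at all times, rather than held in memory.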
That comes from other games. I think a lot of people see that from games in particular, because I can't think of any other situations in real life where something might be flipped upside down and you have to interpret that number. In fact, there's a Kickstarter out, or at least there was. The Kickstarter for this project took a whole bunch of cards, and on each one a whole bunch of different randomizing things are depicted in a photoshopped photo. So it was a whole deck of cards. I think it was a standard poker deck. But it also had all these other randomizing things. It had a dreidel, it had a die with [INAUDIBLE] on it. It had different sizes of dice. It had one of those counting sticks. There's a lot of things here that I am not really sure if I could tell you the name of, but you could probably figure out how they work. What does that look like? It looks like a compass. Now, knowing that this is a randomizing element, how do you think that will be used if you actually had it in real life? AUDIENCE: It's a spinner. PROFESSOR: Yes, it's a spinner. It's a [INAUDIBLE] spinner, in particular. You can spin for one out of six sides on the inside, and one out of eight sides on the top. But there are a couple of key things, like the coins. You could use this deck of cards for coin flips, if you wanted. And there are also these weird fortune cookies. Anyway, it's a cute idea, but I don't need to explain to you how to use this. It's just a whole deck with different pictures on it, and you know you can use it to randomize different letters and different numbers. AUDIENCE: [INAUDIBLE] PROFESSOR: I forget the name of it. AUDIENCE: No, what was the point? PROFESSOR: The point was that you will get a deck of cards where every card is different. And every club looks kind of like this, but it has different randomizing results on it. Cultural cues. This was actually played by a pair of friends. There is a game called War on Terror. It's a very satirical game.
Basically, everybody has exactly the same strategy. Everyone's in charge of a country, basically. Everyone is after oil. Everybody has the same strategies and the same tactics. But if you cross a certain point, you get branded as evil and you have to wear the evil balaclava. And that immediately is a trigger for everybody to gang up on that person. So the point of the game is that it's trying to drive home that all the powers are engaging in exactly the same tactics. At the moment that you actually get declared as evil, you actually get a little bit more freedom in what you're about to do, because you don't have to worry about other people branding you as evil. So the whole idea is-- here is this person in this balaclava. And it had the word evil on top to make it clear, but I don't think that was actually necessary. [INAUDIBLE] a regular [INAUDIBLE] I guess it got the point across that this person is now marked as the bad guy. Metaphor is actually something that-- I actually had some trouble coming up with board game examples. It happens a lot more often in computers, where you get some sort of metaphor to imply how something is supposed to be used. You're not going to exactly use it that way. So for instance, what are these things up here? AUDIENCE: [INAUDIBLE] PROFESSOR: [INAUDIBLE] So what do you think you do if I'm in this piece of software? AUDIENCE: You're going to read the inbox or put it into the outbox. PROFESSOR: You can send messages, and read and check what messages you receive. That's a metaphor that we keep in mail software today. Anybody recognize that? AUDIENCE: It's a Rolodex. PROFESSOR: Oh, people still recognize Rolodexes. That's cool. What does a Rolodex do? AUDIENCE: Contacts. PROFESSOR: Contacts. So that just gives you a list of all the people. This gives you an idea of how old this screenshot is. This looks like a planner, some kind of scheduler.
Magic lamp. Genies. You click on it and can ask things. Help, maybe? Not quite sure. AUDIENCE: Wishes. PROFESSOR: Wishes, maybe. Wishes maybe connect you to tech support. I don't know. You can see the metaphor break down here. This is a phone? AUDIENCE: A fax, maybe? PROFESSOR: A fax, maybe. I'm looking at this, and I know a little bit about the machine that this was designed for, and it didn't have-- you couldn't make phone calls with it. So I have no idea what this thing actually does. Maybe send pictures. This thing looks like a-- AUDIENCE: A purse or briefcase. PROFESSOR: A purse, or a briefcase, or a bag. It's a metaphor for something, but it might be cash transactions. But I doubt it. It could be your files that you save, who knows. Things are starting to break down. But you still have this desktop metaphor that we still maintain in a lot of user interfaces. This one's a real kicker. This is extra [INAUDIBLE] the hallway. I have no idea what to expect from the hallway. User interface? Maybe other applications. We have that a little bit in board games, especially board games that are produced in many, many, many different languages. Because they can't slap words on the tokens. Because if you slap words on them and want to then release the same product in a different language for a different country, then you have to change all the words, and that increases manufacturing costs. So they try to use generic tokens and then just explain everything in the rules, which they can print fairly cheaply, and they can include six European languages in the same box. That won't be a problem. This is a game called Agricola. And it's called a worker placement game. Because what it basically is, is that you've got a bunch of members of your family on your little farming homestead. And then you send them out every turn to do tasks, like go to the fireplace and cook some meat, or go to the market and buy some more feed, or build some fences. Something like that.
So what you do is you take these tokens and you put them on the card. So this is both context and an interaction metaphor. By taking this thing and putting it on this card, I'm achieving quite a large number of different things. I'm saying, this is the action that I'm taking. I am saying I have one, two, three, four, five different things that I could do. Actually, I am not sure. But there's at least four different things that I can do on my turn. Every time I take one of these things and then put it on a card, like the card that you see on the right, that means that that's going to be my action for this turn. The other thing that this game does is that once I've decided that I'm going to do that-- I think in Agricola, once you take it, nobody else can take it. So I can take over the fireplace and no one else can take over the fireplace. I can't put two people on the same fireplace, certainly. So that's a metaphor, right? Now it doesn't really matter that you-- you don't really have to count everything in family members. You could just say you have four actions that you can take per turn. And you just choose which four actions you get to do, do all the math that is necessary to improve your farm, and that's it. But they turned your family members into your action counter, into your action points. The more family members that you have, the more different actions that you can take within your turn. So your family members are your metaphor for action points. We get the same thing in other games with things like workers. Finally, just a very quick word about accessibility. This is usually something I go into much more detail on in my computer game classes, but with board game classes, one thing I do want you to keep in mind is colorblindness. If you have colorblindness, or if you have friends who are colorblind, you definitely want them to take a look at your design, and see whether it is possible for them to play your game.
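The colorblindness check suggested here can be roughly automated: reduce each token color to its grayscale brightness and flag pairs of tokens that would look nearly identical once color is removed. This is only a sketch; the BT.601 luminance weights are a standard approximation, and the gap threshold and example colors are my assumptions:

```python
def luminance(rgb):
    """Approximate perceived brightness on a 0-255 scale
    (ITU-R BT.601 weights: green contributes most, blue least)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def confusable_pairs(colors, min_gap=40):
    """Return named color pairs whose grayscale values sit too close together,
    i.e. pairs a colorblind player may not be able to tell apart by brightness."""
    items = sorted(colors.items(), key=lambda kv: luminance(kv[1]))
    problems = []
    for (name_a, a), (name_b, b) in zip(items, items[1:]):
        if abs(luminance(a) - luminance(b)) < min_gap:
            problems.append((name_a, name_b))
    return problems

# Hypothetical player-token palette for a prototype board game.
tokens = {
    "red player": (200, 40, 40),
    "green player": (40, 160, 40),   # close in brightness to red -> flagged
    "yellow player": (240, 220, 60),
    "black player": (20, 20, 20),
}
print(confusable_pairs(tokens))
```

Flagged pairs are candidates for the fix the lecture recommends: add a high-contrast mark or shape difference rather than relying on hue alone.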
Take a picture through Instagram, or something like that, and put on a black and white filter. See whether your game is still legible and possible to play with all the colors removed. You can accomplish a lot of things by just changing up your tokens, by putting additional marks on the tokens that are high contrast, so that they are not completely reliant on color. If [INAUDIBLE] you can look at bright and dark as well. That's all. If you take that into consideration in your design, you will definitely get kudos from the instructors when they are grading. Because that means you have taken the effort to try to increase the range of people who can play your game. So, that's pretty much what I have to say about usability for today. Any questions? This is not something you have to worry too much about for assignment one. But it's something that you definitely have to think about whenever you're trying to create a game for lots and lots and lots of other people to play. So you will definitely want to start taking this into consideration. What pieces are you choosing? What do the pieces tell you about how to play this game? How does your board tell you how to play your game, even if the rules are not printed on the board? What are the clues that you're giving the player to be able to form that mental model, so you can get them past that learning stage where they're just making mistakes, and start making decisions about how they want to play the game in a way that's satisfying to them? And giving them feedback on whether those decisions work out for them. So that's the presentation. And now I guess it's team time to work on your projects. Rick and I will go get boxes and we'll be right back. If anyone needs to take a break, now is a good time.
MIT_CMS608_Game_Design_Spring_2014
7_Aesthetics_and_Player_Experience.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PHILIP TAN: Again, some of this might be familiar to folks that have taken CMS 611, but I revised this stuff so that I can give you more board game examples. I just want to go a little bit into the kind of aesthetic thing that we're talking about in Assignment 2. We'll go more into things about user experience over the next couple of weeks. But so far, we've been talking about this kind of aesthetic. It came up in the MDA paper, where we've been talking about mechanics, and we've been talking about strategies that come up, and the dynamics of the game coming up. The way your rules play out to be able to give you a sense that some games are about subterfuge, and some games are about pressing your luck. There are some games about just dealing with random things that happen to you, and some games about dethroning the king. These are what a lot of your games have been about. Some games are about feeling smart. It's like when you make a set, it's like, oh, I feel really smart about that. And those come out from your rules. That's what Assignment 1 was all about. You've done a great job with that. But of course, that's not the only thing that's involved in the aesthetic of a game. We're talking about these things as well. All right, we already talked a little bit about game bits with the Donald Norman discussion. But also things like the tone of the writing in your rules. In Assignment 1, it was fairly straightforward. But you could imagine writing everything in Ye Olde English or-- I think Space Alert is a game where the rules are written as if it was a cadet manual, only a cadet manual that is setting you up for a mission that is doomed to failure.
Space Alert is actually a really good example of sound as well, and I'll play some tracks from it. It's a game that has an accompanying CD. But what other games have really distinctive sounds-- board games, card games, casino games? Like what are the sort of things that you-- AUDIENCE: Operation. AUDIENCE: Operation. PHILIP TAN: Operation? Yes. Bzzt. Yeah. AUDIENCE: Sorry. PHILIP TAN: Sorry? Because you say sorry? AUDIENCE: No, I think the whole thing is-- AUDIENCE: Yahtzee? [INTERPOSING VOICES] AUDIENCE: Yahtzee? AUDIENCE: Taboo? PHILIP TAN: Taboo has a buzzer. But it's also a dialogue game, so there's also an accompanying sound that's coming out of everybody's mouths. AUDIENCE: Gestures. PHILIP TAN: Gestures? AUDIENCE: Gestures. PHILIP TAN: Oh, I haven't played that one. AUDIENCE: It's like charades, only that you've got a timer. And the timer has a really loud tick to it. PHILIP TAN: Oh, so it ticks down so you can hear it running out of time. AUDIENCE: Jeopardy. PHILIP TAN: Jeopardy. [HUMS THEME] OK. AUDIENCE: Poker, where they shuffle the chips all the time. PHILIP TAN: Oh, yeah, yeah. That's the little tinkle of the chips and-- AUDIENCE: Slot machines. PHILIP TAN: Slot machines, of course. Yeah. Or rather, a whole bank of slot machines. You very rarely hear a single slot machine operating by itself. Electronic slot machines, obviously, have a slightly different tone to them than mechanical slot machines. Roulette, I find, has a very, very distinctive sound. Just that gold ball bouncing all over the bumps. So definitely think a little bit about that, about the sound of a game. Especially if you're making a game that is about conversation or about over-the-table exchange. Taboo is a game that is punctuated with a lot of ums and ahs, whereas something like charades, the sound of that is a lot of shouting, usually. Everyone just saying random things, the first thing that comes to their mind. It's a free association game.
And, of course, the look of your game. That should be taken for granted, and we've got a couple of examples of various boxes that you can take a look at. And of course, that's what people normally associate with storytelling in games. But it could also be thought of as world building in games. What's the theme of your game? What's the plot, and what are the characters in the game? Are you making, instead of the Trivial Pursuit game, the Twilight Trivial Pursuit game, or something like that? And you've got the setting, you've got characters, even though the game mechanics are all the same. There are games that integrate better with the game mechanics and rules and art direction, and games that do a poorer job, and we've got some examples of that. So, let me see. Let me talk about a few of these games. How many of you have played Forbidden Island? How many of you have played Pandemic? OK, so Forbidden Island is by the same designer, and it still has a lot of the same cooperative feel. First of all, you have this metal tin that makes you think that this game is worth more, and you could be paying more for it. But you've got this nice little art, and because you have to open up this metal tin, it makes you think a little bit about opening up a treasure chest. Inside, you've actually got these nice treasure pieces. A goblet. This kind of translucent crystal. They're all plastic, kind of rubberized plastic, so that they will last a little bit longer. But you can see that this has got [INAUDIBLE] lion, griffin, or something. You know, it's got some of that old world, here's-some-sort-of-mystic-civilization feel. And when you look at the cards, you've got the choice of the fonts. You've got this kind of very serifed and a little bit broken-up text that looks like an old letterpress. And, of course, all the art that's on every single card. The game mechanics of this game are about various parts of an island turning into water. And they don't actually have different illustrations when they get flooded.
They just have the same picture, only bluish. It's cheap to print this because it's monochromatic. I guess it's not really monochromatic, because they have a color token at the bottom. But it gives you the idea that, when you play this game, everything looks nice and colorful. And then as the game goes on, things start turning bluer and bluer. So it gives you the sense that there's more and more water surrounding this island as it keeps flooding. And all these pieces-- each one is a different player's token-- don't necessarily need to be shaped like a human. But it makes you think that there is a human head on the piece. All this adds cost to the production of this. Of course, the metal tin also adds cost to the production of a game like this. But it goes a long way to making you feel like, I'm not just moving pieces on the board. I'm actually exploring an island and robbing it of its treasures. That's what it's trying to get at. This is actually a counter of the flood levels. And the game mechanics stuff says novice, normal, elite, and legendary. So it sets you up with a difficulty level when the game starts. But you have a water level that goes from bottom to top, as opposed to top to bottom. You can imagine this counter working just as well going downwards or upwards. But since it's about flooding, it goes from the bottom, and then the water level rises and rises and rises. And of course, on top, they have a skull. It's not just that the game ends. It's the game ends and everyone dies. Let's just be clear about that. So it does a nice job of getting that across. It actually is a cooperative game, and it does a pretty good job of taking all of these ideas: the setting of the waterlogged island, the art direction of what happens to the pieces as you flip them over. And then, of course, the rules that determine when the water level rises, and how much space you've got to navigate, and how you deal with those problems.
I'd like to describe all of the stuff that we've been talking about so far as systemic aesthetics. So these are the aesthetics that the system is creating. All the things that have to do with art style, writing tone, the sound, the pieces that you're holding-- more or less stylistic aesthetics. And then there's the fiction. The fiction will be the story, will be the characters, will be the setting of the game. But of course, it's a blurry Venn diagram, because there's a lot of things that come in between. So, for instance, the way your board looks. It's, obviously, a combination of the mechanics that specify the information that you need to be able to put on the board, but also the style of how you display that information and how that reinforces the setting that you want your players to be in. The kind of feedback that you give people, like the Taboo buzz, for instance. The sound of that buzz is in here. But obviously, the rules are set up so that, the moment you hear that buzz, it's as jarring and nerve-wracking as possible, because you're trying to concentrate on not saying the wrong thing. And the moment you slip up, another player will buzz it, and then you get the shock of your life. That's all intentional. As for art direction, and for style and fiction, those tend to be things that people have less trouble thinking about. How realistic do you want your setting to be? There's a game called King of Tokyo, which, mechanically, is really everyone ganging up on the leader. And this is a game where you're trying to be a monster taking over Tokyo. And there's a lot of other monsters who want your spot. And once you are on top of the mountain, everyone's trying to take you down. That's what this game is about. But you can see, it's got this goofy, not quite anime style. But still evocative of old Japanese movies. You've got a little picture of Tokyo being on fire here.
You can actually see the monsters in here, you see a lot of helicopters and everything, see all the collateral damage you're doing. I believe it's mostly a dice-rolling game. AUDIENCE: Yahtzee, basically. PHILIP TAN: Yeah, it's Yahtzee. So the mechanics are all about, you roll the dice and you try to roll well. So they put in a lot of effort to try to place that mechanic inside these aesthetic and fictional elements to make it a little bit more engaging and, frankly, a lot more saleable. This is Richard Garfield, who's also the original designer of Magic: The Gathering. So it's got a good pedigree. In between the systemic and the fictional elements, you can do things like events. Twilight Struggle, another excellent game, is about the Cold War, basically. And you've got cards in here that remind people of real things that actually happened. They don't necessarily happen in the order that they happened in real life. There are things like the Space Race. There are things like various presidents, and various Soviet and US leaders from the Cold War. This one specifically says 1945 to 1989. And so while you're playing this game, there are mechanical results for all those cards that you're flipping over, right? But they also have pictures and names of the actual events, and hopefully-- this is largely targeted at people who have lived through this era, or at least remember things that happened from that era. And it's trying to be evocative of that. Whereas it's a fairly mechanical-- I guess it's not quite a war game. It's more like a politics game. A balance of power game. But they use events to remind you of what the setting of this game is. I call it fiction. But of course, this is not fiction. This is based on what actually happened. It's still a setting. Roles and abilities. Back to Forbidden Island: every single character that you play in Forbidden Island has a different set of abilities.
You're taking on a different role, like an explorer or a diver. I can't remember what all the roles are. There's no indication on the box of what you can be. But since everybody has different abilities, that hearkens back to the unique defining things of the character that you're playing. And you try to think in that role. If you are a pilot or something, then you try to think about rescuing people. If you're a diver, you are trying to forge ahead, and you can stay close to the water mechanically. But it also puts you in a fictional position in your team to be able to be closer to risk, because other people will just drown if they're too close to the water. I think that's how the game works, if I recall. So for Assignment 2, we want you to be thinking about all of these things. Obviously, it's difficult to do this with the amount of time and the budget that you're working with. But we want you to consider how your game looks, how your game sounds, the setting that your game is in. We've already had a couple of things like the spies game, for instance. Could this have been a formulation of a game about hidden variables? But then you have this layer of being an agent of an unknown country. And sometimes fiction can actually help people better understand the underlying rules that you've got. I know that one of these games started off as a building construction game. And to a certain extent, some version of those rules might actually help people understand your rules easier. But sometimes it gets in the way, because your fiction may not necessarily match up with the way that your rules are evolving. I'm not saying that you should let aesthetics necessarily constrain what you can do with rules. But if you're going to change your rules, then you're going to have to iterate on your rules to make your game playable, and do everything that you've done with Assignment 1.
You want to make sure that your aesthetic is coherent with that, so that you don't have things in your storyline or your character design or your art style that are outright contradicting what your rules are trying to tell you. Say this is a game about friendly cooperation, and you're saying all of you are on this island together and trying to save yourselves. But really, the way your rules work out is every man for himself. That would be a contradiction. This is an interesting game. Who's played Ticket to Ride? OK. Box design and title can play a very, very large part in setting that expectation. Titles weren't something I stressed so much in the first assignment-- what's the title of your game. But in the second assignment, the title of your game can sometimes be the first clue of what your game's all about. It's a rail building game. You can see from the back a picture of the board, and you're building these tracks. But if you actually read the box, it says, "October 2nd, 1900, 28 years to the day that north London eccentric Phileas Fogg accepted and won a $20,000 bet that he could travel around the world in 80 days. Now at the dawn of the century, it is time for a new impossible journey. Travel across the United States. The impossible journey. So some old friends have gathered to celebrate Fogg's impetuous and lucrative gamble, and to propose a new wager of their own. The stakes: $1 million, a winner-takes-all competition. Your objective: To see which of them can travel by rail to the most cities in North America in just seven days." It's interesting, because, at first glance, you look at the game mechanics. And the game mechanics actually feel more like you're building railway connections, rather than riding the rails. But if you look a little bit closer at the strategies of this game, what this game really comes down to is blocking other people from completing their routes across America.
Which I think is closer to the theme that it's suggesting. It's not about, is it possible for you to complete a route around America in seven days? No. It's who can do it fastest, which means, how do we slow down everybody else? And that's actually what high-level strategy in this game turns out to be. So it's about all these tickets that you're picking up, and you're going from place to place. You're trying to complete routes, and you're trying to prevent other people from completing their routes. One of the interesting things about this game is that you've got all these people on the box. And, of course, the assumption is that you are one of these people. And that's deliberate. The board game designers wanted you to select someone on the spot that you could personally identify with and say, I want to be this person who looks like [INAUDIBLE], or this person who looks like Phileas Fogg, because I like those folks. But then, when you actually play the game, there is no real embodiment of any of these characters. There's a little piece that you place down that says, I am this person. And the only level of fiction that's represented in the rules is the tickets and the routes that you're taking, not the people. So they've gone to this very, very large effort to make you think that this is a game about people, just because they thought that games about people tend to sell better than games about trains. But the game is really about trains. So I guess [INAUDIBLE]. Let's see. You want to talk about these other games? RICHARD EBERHARDT: So Alien Frontiers. Does anybody know the term worker placement? A worker placement game is where you have a limited number of resources that you're basically applying to different sections on the board, which give you points or do things for you. It's a great game about colonizing a world. So a lot of really good, old-style '50s and '60s science fiction art.
Lots of rockets, spaceships, things like that. And your actual bits-- the reason I'm using this one is because this is a game that starts off with a plasticky Euro style. So cubes and things like that. They don't really represent anything. Your bits, your ships, are these dice. You buy a rocket, you roll the rocket, it gives you a number. And then using those numbers, you can apply it to various different places on the board. So [INAUDIBLE] are actually pretty good, because you can put the piece down and it stays there. If it gets moved around, it doesn't really matter if it changes number, because if you only use that number when you're placing down the piece, the number is no longer relevant afterwards. One thing we see in a lot of student games is people using dice as counters. That's a little bit easier with a d6 because its faces are square. But once you start using the d20s as counters, they've got really small facets, and they're going to roll and you're going to lose your place. Also, the amount of time it takes to go from one number to the other number really increases the time it takes to play the game. But one thing they've done-- it started off with these little chips. Now they've added more plastic bits. Special powers you can get to move around that look zany. Little alien pieces of technology. And colonies you can place-- they kind of look like colonies now. I don't have it here, it's not released yet, but it's on order. They actually made the rockets dice, so the dice are in the shape of rockets. So imagine this as an oblong shape that's kind of curved, with cut faces on it. One thing I'm curious about is how it's going to roll. They've made it so that it feels more representational, like it feels more like part of the theme, but it might actually take away from the actual feel of rolling dice. It'll be interesting to see how that plays out when it comes out.
PHILIP TAN: When you actually roll the dice, it has its own distinctive sound, and people have positive associations with that sound. RICHARD EBERHARDT: And then the other one, a game about trucking. AUDIENCE: Heartland. RICHARD EBERHARDT: The Great Heartland Hauling Company. So on the box, you see a semi truck carrying little squares. On the back, you see semi trucks and squares, but the squares are in a map. And the squares don't really fit on the trucks. So, actually, I brought this out thinking, that's a great way of saying it's not-- kind of like the other one, they're showing something on the box that doesn't actually exist in the game. But I found out, when I actually took it out and opened it, they have cards where you put your bits on. So you are kind of carrying things around. But there's a disconnect between where things are on the map and where things are on your truck, and then where your truck is actually placed on the map. So your eyes are going to different places when you're actually playing out the game. That's largely done because of the commercial manufacturing constraints of it. As you can see, it's basically just two decks in a small box. You'll see a lot of these games-- if it's a game they want you to play in a short period of time, it's likely going to have a smaller box than a game that's going to play longer. It's a code for how complicated the game is, or how much of your time the game is going to take. PHILIP TAN: So definitely pay attention to how your pieces feel. Again, we're not asking you to go completely to town, like laser-cutting pieces or anything like that. That's not necessary. Just that every time you think of the pieces in your box, or decide to glue two pieces together or make them a different shape or something, think about what that's communicating to the player.
Think about what the box, or the art in the rules, is conveying about how the game is supposed to be played, and try not to mislead your players into thinking that this game is really about stacking things on little plastic trucks when it really isn't. It's about stacking things on cards. That sort of thing. So, yeah. Any questions about Assignment 2? AUDIENCE: And then on the assignment, it's not asking for polished art? PHILIP TAN: Yep. RICHARD EBERHARDT: So think about the materials, think about the textures. It's still a short assignment, so don't think about that kind of art. PHILIP TAN: Yeah. RICHARD EBERHARDT: More which kind of pieces you decide to use. Really think about the title. It's really, really important. PHILIP TAN: Yep. The title, and your writing in general, is something where you can get a lot out of a little bit more time investment. The tone of your writing in your rules can go a long way. But we're still looking for rules that are easy to read. So don't use Shakespearean styling or something like that. That might make things a little bit tricky, unless you're very, very consistent with it throughout, and you're only using it for nouns or something, which could work pretty well. Sketchy art is fine-- just a hint of what you are trying to get across with your art. We're not judging anybody on your ability to draw. But we are interested in your thought process and how you consider how you represent your game. All right. AUDIENCE: So the first line of the assignment [INAUDIBLE]. Do we get to choose the experience that we want to-- PHILIP TAN: Yep. And that comes back to the overall aesthetic. All of these things are working together to generate a player experience. And we'll be going through some examples of other games that generate the user experience of panic, or the experience of close-shave cooperation. Again, remember, mechanics and aesthetics. The aesthetic is the thing the player actually experiences.
And that's true for any definition of aesthetic. The question is, what are all the things that you can do to hopefully generate that? Of course, you only know whether they're hitting that [INAUDIBLE]. AUDIENCE: Is it supposed to be a quick game? PHILIP TAN: Do we have a time limit in the-- RICHARD EBERHARDT: I think I just copied it over. 20 minutes. PHILIP TAN: 20 minutes. So aim for that. It's a little longer than the last one. And the reason for that is largely that we want to be able to grade these things and get your grades back to you. So if you made a two-hour-long game, and everybody makes a two-hour-long game, we're never going to finish grading. So short games help. And short games are easier to test. RICHARD EBERHARDT: And for the first question, on Monday, we'll be talking about how to choose an experience [INAUDIBLE]. We'll do that right before we get into brainstorming and team forming. PHILIP TAN: Cool. All right. I guess we're ending a little early today. All right. Thanks, everyone. Don't forget to sign in if you haven't already. RICHARD EBERHARDT: Just about everybody did. PHILIP TAN: Thank you. Oh yeah. Please hand in your
MIT 2267J Principles of Plasma Diagnostics, Fall 2023
Lecture 4: Langmuir Probe
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So today, we're going to be talking about Langmuir probes. Now, I'll freely admit I find Langmuir probes a little bit confusing. I would say that you need to know a lot about sheath theory to properly understand the Langmuir probe. And to do that in a single class is very, very challenging. So if you look in Hutchinson's textbook, he takes a significant amount of time on these probes. He does that because he really likes probes, and that's very reasonable. Everyone has their little biases. I'm not going to spend that much time. And what I've done in particular is I've tried to rework some of the material in Hutchinson's book in a way that makes more sense to me. Now, it may not make more sense to you, so this is why I strongly encourage you to go have a look at Hutchinson's book as well and maybe interpolate between those two. And I also included in the syllabus a reference to the book by Lieberman and Lichtenberg, which is very good on sheath theory. It's a book on low temperature plasmas, where sheaths are very common. So if you are a little bit weak on sheath physics and you want to review that, I recommend that book. Now, one thing I will say is that everyone uses different notation. And in an attempt to make things clearer, I've also used a third and different notation from that used in Hutchinson and in Lieberman and Lichtenberg. So good luck with that. I think mine is less confusing than Hutchinson's, but of course it's that classic thing about there being too many standards, and now I've invented a new standard. So just keep an eye on all these quantities. I'll try and explain what they are. If you're not sure about any of them, please shout out. So the Langmuir probe-- it's a very simple probe. It's simple because all we take is a conductor, some sort of rod of metal, and stick it into our plasma like this. And then we measure the potential, which I'm going to be calling V0 here. This is the potential on our probe.
And so if we think about what this looks like, we can draw a little probe. Maybe it's a little rod of metal like this. And it's surrounded by plasma like this. And we know from our intro to plasma physics, whenever we see a conductor in a plasma, there's going to be something forming around it. So if this is the plasma and this is the probe, then this is a region that we call the sheath. And so we're going to have to understand how sheaths work. But before we get fully going with sheaths, we're just going to give an intuitive explanation of why we're going to get a sheath. And then I'm going to give you a very brief overview of the sheath results without doing any actual formal derivation of them, because it would take too long. So at the instant that we stick this probe into the plasma, it's going to be immersed in a bath of electrons and ions. And so there are going to be electrons and ions striking this conductor. And we're going to assume that when they strike this conductor, they stick to it. So effectively, it starts to gain charge. So the first thing we want to know is, what is the flux of particles? So this is a flux, capital gamma, of particles of species j that is striking against this probe here. And we're going to make an approximation that this probe is a planar surface. That means we pick up a factor of 1/4, just from geometry. And so that is going to be 1/4 of the number density of the particles times the average speed that the particles are going at. So this is the average particle flux. I haven't said anything about charge yet. I just want to note here, I'm going to be using u for velocity, and the reason is that there's an awful lot of v's that crop up. And I think using a lowercase v for velocity and an uppercase V for potential is going to cause a huge amount of confusion. So this is the first place where I diverge from Hutchinson. So just keep that in mind. AUDIENCE: Where does the 1/4 come from? JACK HARE: The 1/4 comes from geometry.
If you go look this up in some standard statistical mechanics textbook, you'll find out it's because you've got a load of particles moving in random directions, and you've got a plane here. And so it's the number of particles that have a velocity, which is directed through the plane that you're talking about here. So it's a geometric factor. I haven't derived it. AUDIENCE: Ok. Thanks. JACK HARE: Cool. And so that means that the total current that this probe is instantaneously collecting as soon as we stick it into the plasma is going to be equal to the particle charge times these two fluxes. So we're going to have-- I'm going to put a minus e at the front here. We're going to have an area which represents the surface area of the probe. We're going to have our 1/4 that we have for both species, and then we're going to have a term that looks like ni ui minus ne ue here. So that's the current that we're instantaneously collecting. Now, it turns out, of course, that the average velocity of the electrons is going to be much, much bigger than the average velocity of the ions in general, and that is just a fact because the electron mass is much, much less than the ion mass. And so if they've got the similar temperatures, the electrons are going to be moving faster, which means that at least as soon as we stick the probe into the plasma, this current is going to be completely dominated by the electrons. So it's going to be 1/4 eA ne electron density, ue, like that. And almost all of the particles which are colliding with the probe are going to be electrons. So this means that the probe charge is negative because it's collecting loads of electrons. And it keeps going until all the electrons are repelled or until a vast majority of the electrons are repelled. And it will keep going until we have no current. 
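To make that electron-dominance argument concrete, here is a minimal numerical sketch. The Maxwellian mean-speed formula u_bar = sqrt(8T/(pi*m)) and the illustrative values (1 eV, 10^17 per meter cubed, hydrogen) are my additions for illustration, not quantities stated at this point in the lecture:

```python
import math

E_CHARGE = 1.602e-19  # C
M_E = 9.109e-31       # kg, electron mass
M_P = 1.673e-27       # kg, proton mass (hydrogenic ion)

def mean_speed(T_eV, mass):
    """Mean speed of a Maxwellian distribution: u_bar = sqrt(8*T/(pi*m))."""
    return math.sqrt(8.0 * T_eV * E_CHARGE / (math.pi * mass))

def wall_flux(n, T_eV, mass):
    """One-sided random flux onto a plane: Gamma = (1/4) * n * u_bar."""
    return 0.25 * n * mean_speed(T_eV, mass)

# Illustrative (assumed) values: equal temperatures of 1 eV, n = 1e17 m^-3
gamma_e = wall_flux(1e17, 1.0, M_E)
gamma_i = wall_flux(1e17, 1.0, M_P)
print(gamma_e / gamma_i)  # ~sqrt(m_i/m_e) ~ 43 for hydrogen
```

The flux ratio is just sqrt(m_i/m_e), about 43 for hydrogen at equal temperatures, which is why the freshly inserted probe charges up negative.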
And by the time that the probe has reached a situation where there's no total current, we say that the probe has reached a potential, which we call the floating potential. And it's called the floating potential because if we just stick the probe in the plasma, we don't try to bias it at all. And we just let its potential adjust until it reaches this i equals 0 level. It will reach some potential. We just let it float to that potential. It's a floating potential. But the key thing here is that this is not the same as the plasma potential. So the plasma itself is going to be at some other potential, which we will derive later on. And so the floating potential, we're going to call Vf, and the plasma potential we're going to call Vp here. And this difference in potentials is due to the fact that we have perturbed the plasma. We have stuck some probe into it, which has changed the nature of the plasma. It's created this sheath. It's caused all sorts of electrons and ions to fly around, charges to fly around. So there's no good reason for it to have the same potential as the plasma at all. OK. Any questions about this? OK. We're going to go on. So what we want to do now our probe has reached a floating potential is actually change that potential and measure what current we draw. We know that, for the floating potential, we get 0 current. But if we bias the probe in some way, we'll get different currents. Nigel, I see your hand. AUDIENCE: Yeah. I wanted to ask, what ground exactly are we referencing the potential measurements to? Is it just any random ground or is it anything specific? JACK HARE: I can give you a short answer, and I probably won't give you the longer answer. The short answer is, as you know, the potential is just relative. We're only interested in relative potentials. And it makes sense in this to use the potential of the plasma very far away as our reference, v equals 0. AUDIENCE: OK. JACK HARE: And that's what we're going to do here. 
But of course, mathematically, we could choose any other potential, and that's absolutely fine. Another useful ground if you have a probe stuck inside, for example, a laboratory plasma, another potential-- you could use as a reference, but we're not going to-- would be the potential that the vacuum vessel takes. And another potential you could use if you are a Langmuir probe mounted to a satellite flying through space will be the potential of the satellite itself. I'm just going to mute 218 because it's very sensitive microphones in there. Thank you very much. But please feel free to unmute if you want to ask a question. OK. Any other questions before we keep going? OK, good stuff. So as I was saying, we are now going to bias our probe, which is effectively changing the probe potential, V0, and measure the current that we draw. And depending on where we bias this probe, we're going to end up in different regions. So we're going to have here as an overview, a qualitative picture. And we're going to go and do the calculations for the quantitative picture later on. So our qualitative picture, we have an IV curve, where we have I vertically like this and V along here. We're going to identify our 0 current with the floating potential, as we've already done. We're going to say that somewhere above the floating potential is the plasma potential. And it turns out that we end up with an IV curve that looks like this. And there are a few important quantities on here. There's the electron saturation current. And I'll just draw this a little bit better down here for the ions because it showed some asymptote. There is the ion saturation current down here. And I'll explain where all of these things came from. I just want you to have this drawing in mind while I start explaining it. So one place we know what the current would be, or two places we know what the current would be, are at these specific potentials here. 
So at the plasma potential, the current is going to be entirely due to electrons, which are streaming into the probe, because if we're up at the plasma potential, the ion contribution is negligible. So we're just going to end up with this electron saturation current: 1/4 times the electron charge times the probe area times the density times the random electron velocity. We showed that earlier. So this is obviously quite a large current. Another place that we know, as I said, is at the floating potential here, where the current is just going to be equal to 0. OK. But then there are three other regimes that I want to talk about, and I'm going to call them A, B, and C here. So let's start with A. A is when we have our probe potential greater than the plasma potential here. We're going to attract all the electrons. We're going to repel all the ions. And so we're just going to end up with basically the same current. So I is going to be roughly equal to the electron saturation current. Now, in general, the current actually still increases weakly with voltage, for reasons that we're not really going to go into here. It's quite difficult to get the mathematics for this correct, and we don't tend to work in this regime. We don't tend to bias the probe such that we're drawing the electron saturation current, and we'll talk a bit about that later on. And so we're not going to go into too much detail about why that happens. But that's why it's drawn that way on the curve. There's also the regime B here. This is where V0 is less than the plasma potential but greater than the floating potential. And here, we start getting into a region where some electrons are repelled and some ions are attracted. And that's because the probe is charging up more negative, and so it wants to repel things here.
We will go into trying to find an analytical formula for exactly what this curve looks like, but just qualitatively, this is why we have a decrease in the current here. And then finally, we have this regime, C. And in this regime here, we have a potential on the probe which is less than the floating potential. And we're going to repel all the electrons and we're going to gather all the ions. And we will then be drawing the ion saturation current, which is 1/4 ea ni ui bar. So this is just the counterpart for the electron saturation current. And of course, it's smaller by the square root of the mass ratio or something like that, probably. So it's a fair bit smaller than the electron saturation current here. Now, the problem with this picture, and the reason why we can't be more quantitative is at the moment, all of our solutions-- so ise and isi-- they're functions of things like ne, ue, and ni, ui. And they're functions of these at the probe surface. That's not particularly useful because what we'd like to do is relate these quantities to the parameters inside the plasma-- things like the density and the temperature and things like that. And so these measurements of the properties of the plasma right at the probe surface aren't particularly useful. We need to have some theory that links-- excuse me. We need a theory to link the probe quantities to the plasma quantities. And that theory is unfortunately sheath theory. So we're going to have a go at deriving quantitatively some of the quantities here that I've just sketched very qualitatively. Before we go on, please ask any questions. So who can tell me what the length scale for this sheath is? What's an important length scale in sheath theory? Yeah, just please shout it out. AUDIENCE: Debye length? JACK HARE: Yeah, Debye length. I never quite remember how to spell Debye. I may have put an extra E in there. Maybe this one doesn't exist. OK. So this is often written as lambda Debye. 
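As an aside, the qualitative I-V picture from regimes A, B, and C above can be sketched with the standard idealized single-probe model: a constant ion saturation current plus a Boltzmann-retarded electron current below the plasma potential. The specific numbers, and the crude (1/4) n u_bar estimate for the ion saturation current, are my assumptions for illustration, not the lecture's sheath result:

```python
import math

E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg, electron mass
M_P = 1.673e-27        # kg, proton mass (hydrogenic plasma)

def mean_speed(T_eV, m):
    """Mean Maxwellian speed: u_bar = sqrt(8*T/(pi*m)), T given in eV."""
    return math.sqrt(8.0 * T_eV * E_CHARGE / (math.pi * m))

def saturation_current(n, T_eV, m, area):
    """I_sat = (1/4) * e * A * n * u_bar: the random one-sided flux times charge."""
    return 0.25 * E_CHARGE * area * n * mean_speed(T_eV, m)

def probe_current(V, V_p, T_e_eV, I_es, I_is):
    """Idealized I-V: Boltzmann-retarded electrons below V_p, constant ion current."""
    if V >= V_p:
        return I_es - I_is                               # regime A (idealized as flat)
    return I_es * math.exp((V - V_p) / T_e_eV) - I_is    # regimes B and C

# Assumed example: 1 eV hydrogen plasma, n = 1e17 m^-3, 1 mm^2 probe, V_p = 0
n, T_e, A, V_p = 1e17, 1.0, 1e-6, 0.0
I_es = saturation_current(n, T_e, M_E, A)
I_is = saturation_current(n, T_e, M_P, A)  # crude: same random-flux form for ions
print(I_es / I_is)                         # ~sqrt(m_i/m_e) ~ 43

# Floating potential: set I = 0  =>  V_f = V_p - T_e * ln(I_es / I_is)
V_f = V_p - T_e * math.log(I_es / I_is)
print(V_f)                                 # a few T_e/e below the plasma potential
```

Setting I = 0 gives V_f = V_p - T_e ln(I_es/I_is), a few T_e/e below the plasma potential, consistent with the sketched curve crossing zero at V_f.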
So the Debye length is equal to, in SI at least, epsilon 0 Te over e squared ne, all to the 1/2. It's an important length scale for sheaths. But as you probably know, it doesn't appear in, for example, MHD or other, simpler fluid pictures of the plasma. It's just important to sometimes realize the limitations of some of the other models we use. You can't model how a Langmuir probe works using MHD. It just doesn't work. And what is important about the Debye length? What length scale does it represent? What's a physical intuition we have for it? AUDIENCE: Distance over which voltages are shielded. JACK HARE: Yeah. I guess we can say within a few lambda Debye, the probe perturbation vanishes. So although we've stuck this probe in, and it's going to be at a very different potential to the plasma, the plasma won't see it over a few lambda Debye or so. And if we take some cold plasma-- let's say it's got a temperature of about one electron volt and a density of around 10 to the 17 per meter cubed-- this will give us a Debye length on the order of 20 micrometers. And so for any reasonable probe which we could construct, which has some characteristic length scale of A, this Debye length is going to be very, very, very small. So we can say that lambda Debye is much less than A. And this leads us to being able to make, in most of our sheath theory, what we call a quasi planar assumption. We don't have to worry about the shape of the probe. We just worry about some infinite plane. And so we can treat this like a one-dimensional situation. You will see the sheath equations are complicated enough even in 1D, so this is a nice simplification to be able to do. So the sheath equations-- to solve this properly, you need to start doing things involving Gauss's law and the Poisson equation. And it turns out that when you write this out properly, these equations are non-linear.
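Before going on to the sheath equations themselves, here is a quick numerical check of that Debye-length estimate (a sketch; note that with the temperature given in eV, one factor of e cancels in the formula):

```python
import math

EPS0 = 8.854e-12       # F/m, vacuum permittivity
E_CHARGE = 1.602e-19   # C

def debye_length(T_e_eV, n_e):
    """lambda_D = sqrt(eps0 * T_e / (e^2 * n_e)) with T_e in joules;
    with T_e supplied in eV, one factor of e cancels."""
    return math.sqrt(EPS0 * T_e_eV / (E_CHARGE * n_e))

# The lecture's cold-plasma example: T_e ~ 1 eV, n_e ~ 1e17 m^-3
lam = debye_length(1.0, 1e17)
print(lam * 1e6)   # ~24 micrometers, i.e. "on the order of 20 micrometers"
```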
To solve the sheath equations, you're also going to need a Boltzmann factor-- minus eV over Te and stuff like that. So these equations are very non-linear. They've got exponentials in, they've got gradients in, all sorts of stuff like that. And that means they're very complicated. They're still complicated even with big assumptions. So one assumption that we almost always make in sheath theory is that the ions are cold. They've got a temperature of 0. And obviously, that may not be very well justified in your plasma, but it allows you to make a lot of progress. This actually is relatively well justified in the sorts of low temperature plasmas you tend to stick Langmuir probes in. But I'm sure that you can think of a plasma where the ion temperature is not 0 or isn't much less than the electron temperature. What I'm saying is that even if we put in these very big simplifying assumptions, the sheath equations are still very complicated. And so what I'm not going to do is give you a complete derivation of the sheath equation. That is what I did last time I taught the course two years ago, and I've never seen a class lose interest more quickly. It was a complete disaster. So I'm going to give you a very high level overview. I'm going to tell you the results of it. And I understand that might be unsatisfactory. So I encourage you to go look at Hutchinson's book or Lieberman and Lichtenberg and learn a little bit more about it if you want to. At the end of the day, I think these Langmuir probes are a tool. We should understand how they work and some of their limitations, but we just want to know what the answer is. So we're going to plow on a little bit. Any questions on any of this before we keep going? OK. So this is my one-page brief overview of sheath physics.
What I want you to bear in mind is what we are trying to get out of this: the electron density and the electron velocity at the probe, and the ion density and the ion velocity at the probe, in terms of the density far away from the probe, which we'll call n infinity, and things like the temperature of the plasma. So this is what we're going after. So whenever I write down an equation, think to yourself, did that help or did it not help? OK. So I am going to draw a little sketch, which is going to take me a little while to get right here. On the right-hand side of our domain, we're going to have a wall, which is our quasi-planar probe here. And we're going to have a coordinate pointing from left to right, which is the x-coordinate here. And there are a few important places on this x-coordinate. There's infinity, which is very far away in the unperturbed plasma. There is a point that I'm going to label s here, and there's a point that I'm going to label 0. So 0 is the probe. This region between s and 0 is the sheath. This is the imaginatively named presheath. And this is the plasma. So again, what we want to do is take measurements at the probe and link them back to the properties in the bulk plasma without worrying about what this sheath is doing. But what the sheath is doing is very, very important, so we need to work that out first. So first of all, I'm going to sketch the voltage. This voltage is going to be flat in the plasma, and it's going to have a value of 0. And we are just defining the voltage in the plasma as 0, because that's our reference voltage here. The voltage is going to drop off ever so slightly in the presheath here. And I'm really exaggerating, but it's going to drop off a little bit in the presheath. And then it's going to drop down to whatever potential it is at the probe, which is the bias point that we put our probe at. And we call that V0, and we call the potential at the edge of the sheath Vs, for V sheath.
And I could write V infinity here, but again, we've decided that's going to be 0, so we're going to measure everything with respect to that. And you'll see in the picture I've drawn here, everything is negative with respect to that. OK. The other thing I can draw-- and you can imagine this is a separate axis down below here-- is the density. So out in the plasma, we have a density of ne, and that's equal to ni, and that's equal to n infinity. So I've invoked quasi neutrality inside my plasma here. There's no change in the potential, so there's no change in the electric field, and so there's no difference in the number of charge carriers. And I'm assuming that Z equals 1 here. This is a hydrogenic plasma, just for simplicity. Of course, I can put Z back in if you want to make life more complicated. Then in the presheath region, this is a little bit subtle. We are saying, yes, the potential is changing, but it's not changing that much. It's quasi-neutral-ish. So the density-- it turns out it drops a little bit. But we're still going to say ne is roughly ni, though it's now no longer n infinity. Again, we can do that more rigorously if you look in the book. For now, we're just going to go with it. And then finally, this density splits, so that we get the ion density and the electron density separately. And this is happening because in this region, where the potential is changing very fast, we have strong electric fields. We no longer have quasi neutrality. We actually have electrons being repelled, so there's fewer of them, and we have plenty of ions here. And so this picture is obviously applying in the regime where our probe is biased low enough that we're repelling electrons and we're gathering ions. And I'll justify that in a moment. So this sheath forms-- let me just think about this. We only get a sheath when our V0 at the probe is less than minus Te over 2e. That's T subscript e, the electron temperature.
This is the first result that is completely non-obvious. I have not derived this. It is just true. So we might think, where are we going to get some of these potentials in our system? What does our system have? Well, the only thing it's really got is a temperature. And we often measure the temperature in electron volts, so you might be thinking to yourself, aha, maybe the voltages that are going to show up in this solution have to do with the temperature of our plasma. And you're right, but the factor of 2-- that's not at all obvious. You can just work it out. And so we only get a sheath forming when we bias the probe sufficiently negative, and that is because the voltage at the sheath-- this Vs here-- is exactly minus Te over 2e. And so if you bias your probe above that potential, you don't get a sheath forming, and all of this sheath physics falls apart. And later on, we're going to show that Vp minus Vf is about 3 Te over e or so. So this means that our probe has to be biased somewhere around about Vf, or at least significantly lower than the plasma potential here. So we're going to be operating near the floating potential. And there are other good reasons why we want to operate near that floating potential as well, but that's skipping ahead. That's just a preview. So that's the first fact that we now know: our sheath edge is going to be at minus Te over 2e, like that. The next thing we're going to say is that the ions are accelerated across the sheath. So although we know the potential at the sheath, the potential at the probe is still arbitrary, so this isn't particularly useful. But we can make use of the continuity equation to actually make some rather powerful arguments here. So continuity is going to say that the flux of ions which are hitting the probe, which is the quantity that we're measuring with our probe-- so this is ni at 0 times ui at 0.
That is simply going to be equal to the flux of ions which is crossing the sheath boundary here. So ni like this and ui like this. This actually turns out to be very, very powerful because it means we don't need to know what the density and the velocity are at the probe itself. We just need to know what they are at the sheath. And the reason that's useful is because we can write the density at the sheath here as roughly equal to the electron density at the sheath. Remember, we're in this quasi-neutral regime down to here. And so at this point, we're saying, eh, the potential drop is small enough that it stays quasi-neutral. So we can still use quasi-neutrality to write the ion density in terms of the electron density. And the electrons we're going to treat with a Boltzmann factor here. So they're going to have the density that they would have out in the bulk plasma, but reduced by a factor of exponential of e times the sheath potential over Te here. And if we plug the potential of the sheath in here, we'll get n at infinity times exponential of minus 1/2. And if you're wondering what exponential of minus 1/2 is, it's about 0.61. I'm going to leave it as exponential of minus 1/2 in all of these calculations, just so you don't ask, where did 0.61 come from? But it is just 0.61, so it's about a half. I'm just going to keep pushing on with this and then I'll take questions, because some of them may become clear. The next thing we're going to do is conservation of energy. And we're going to assume that our ions start off with a velocity of 0. So let me rewrite that. I'll move this over here. I say that my ion velocity at infinity is equal to 0. So this is the cold ion approximation I talked about earlier. We're going to be accelerating these ions up. And so then we can say that the kinetic energy of the ions at the sheath-- so 1/2 mi ui of s squared-- is just going to be equal to the change in potential energy here, which is e times the sheath potential.
Remember, we're referencing V at infinity as 0, so there should be a minus V at infinity here. But that disappears. Yeah, Nigel. I see your hand. AUDIENCE: Are those dashes bullet points or is that a minus 1/2? JACK HARE: Those are all bullet points. I apologize. AUDIENCE: OK. Thanks. JACK HARE: Yeah, no worries. It's a good question. Ta-da. Any remaining minuses are your problem. OK? Good. Any other questions while we're paused? OK. So from this, we can then infer the velocity at the sheath is going to be equal to Te over mi to the 1/2, where, again, I have just plugged in our sheath potential here. OK. This is rather good. We're almost there. The final thing we might want to know is the density of the electrons at the probe surface. So this would be ne of 0, and that would be equal to n infinity exponential of e V0 over Te. So that's going to depend-- in this case, for the electrons, we're going to make it depend exactly on the probe potential. For the ions, we made use of this trick using the continuity equation to avoid actually knowing what their density was at the probe. Because it's clearly no longer equal to the electrons, and so we can't use the electron Boltzmann equation of state in order to get it. So we're going to put this together on the next slide, but I just want to point out right now, I think we have now written all of the quantities for the electrons and ions at the probe in terms of quantities in the bulk of the plasma. So this is extremely powerful because we can now use that relation. And I saw a comment in the chat here, which I'm just going to read. We're assuming T is constant through the sheath. We are assuming that T is constant through the sheath. How is this apparent or valid? For example, keeping pressure constant. I believe the electric field will contribute to the pressure balance here, and so you don't have to worry about just the thermal pressure being involved.
We've got electric fields involved here as well. OK. Any other questions? OK. So now if we have our probe-- so this is for V0 less than minus Te over 2e, which is the condition that all the sheath theory requires. And just remember, V0 here is V of the probe. Hutchinson starts using Vp here to mean the voltage of the probe, but then you might also think that's the voltage of the plasma, and I find that very confusing. So I'm writing it as V0 here. But we can now say that the current that we draw-- so I drawn is-- and we can split it up into contributions from the ion current. So this is just going to be equal to minus e times the area the ions are passing through, ni of 0, ui of 0. And we can just replace that with ni of s, ui of s, using continuity. And we have that on the previous page. And I just want to point out, when we previously wrote down this equation, we had a factor of 1/4. That 1/4 has disappeared because, in fact, we have no random motion of our ions anymore. We set their initial temperature equal to 0, and so this is now all directed. So our geometric factor is now irrelevant because all the ions are going in the same direction. Previously, it was 1/4 to account for random motion. Now it's just 1 because they're directed like this. And that means we can write down the ion current as minus the exponential of minus 1/2, which is just 0.61, times the area times the electron charge times the ion density at infinity times the incoming velocity, which is Te over mi to the power of 1/2. I just want to make a quick point that I'm not going to labor too much. This area here is technically the area of the sheath. However, that is roughly the area of the probe, with corrections on the order of lambda D over a. And we found that the Debye length tends to be very, very small, so lambda D over a tends to be very, very small, and we don't really worry about those corrections.
But if you end up in a regime where that's not true, you might want to worry about this a little bit more. So that's the ion current. Any questions on that? The electron current now is going to have a factor of 1/4 coming back because the electrons are randomly moving. Some of them are being repelled, but in general, we're just dealing with a random Maxwellian distribution function. And then we have these factors of e and A. And then we have the density at the probe and the velocity at the probe here. This A is genuinely the area of the probe because we're working with the x-coordinate set to 0 at the probe. We worked out what the density was with a Boltzmann factor. The thing that we didn't have is this. And this is just going to be Maxwellian. And so we use the standard result for a Maxwellian distribution here. So this all comes out as 1/4 times e n at infinity exponential of e V0 over Te. And then there's this factor of the Maxwellian, which is 2 times 2Te over pi me to the 1/2. So it's slightly ugly, but that's the average or the mean velocity in a Maxwellian distribution here. And then we can put this together. We can say that the total current that our probe is now drawing is equal to the ion current plus the electron current. And this is Hutchinson's equation. Gosh. I didn't update my notes, so I don't have it. But where is Hutchinson when you need him? I don't have it straight away, unfortunately here. My memory of it is it's equation 3.2.23. And perhaps someone could check that. AUDIENCE: I think it's 2.29. JACK HARE: 3.2.29? AUDIENCE: Yeah. JACK HARE: Thank you very much. I appreciate it. OK. And it's this long thing. So the reason I'm giving you the equation citation is if I make a mistake copying this down, you should go refer to Hutchinson instead. So what I've got is n infinity, electron charge, area of the probe, square root of electron temperature over the ion mass.
And all of this is now times a component to do with the electrons, which is 1/2 times 2 mi over pi me to the 1/2, exponential e V0 over Te, minus the ion contribution, which is exponential of minus 1/2. So this is e minus, and this is the ions here. So this looks a little bit complicated, but let's have a little think about it. This ratio, mi over me, is large. That just depends on fundamental quantities. But this exponential here is going to be small because we know that V0 is less than Vs. So it's less than minus Te over 2e. So this quantity here is going to be less than 0.6. And although this is large, it's large like 40 or so. And we can imagine if we decrease the voltage on the probe even more, the small thing is going to be smaller than the large thing is large. And we can see how the electron current could be completely extinguished and we're left with just the ion current. Yes, Sean. AUDIENCE: In Hutchinson's version, he has the ion term multiplied by the ratio of the sheath area to the probe area. Are you just saying that that's about 1? JACK HARE: Yeah, exactly. So he puts in this ratio. And like I said, it is important. But it's one of the things I'm glossing over in this class. But you're quite right. That's there. I just set the ratio of the sheath area to the probe area equal to unity. AUDIENCE: Got it, thanks. JACK HARE: Cool. Any other questions? This is our big result. But we're now going to go use this to learn about what happens to Langmuir probes. Nigel? AUDIENCE: Have we discussed how you find the area of the sheath? Or is that just theoretical? JACK HARE: So what I've said here is that the area of the sheath is basically the area of the probe. AUDIENCE: Yeah, I mean beyond that. JACK HARE: We haven't discussed that. There's a long discussion in Hutchinson about it. Effectively, there are corrections on the order of the Debye length to it, which may or may not be important.
And it turns out if you bias the probe very negative, deep into the ion saturation region, the sheath actually continues to grow. And so you continue to gather more and more ion current. And that's one of the reasons it's actually very hard to measure the ion saturation current, because it's not like a flat asymptote. It's just a slowly growing function. So if you ever sit down and work with Langmuir probe data, any of you, you'll find it's incredibly frustrating because all the stuff in the textbooks doesn't really work. And last time we did this, we did have some Langmuir probe data to analyze. But you know, I thought you guys would prefer to have full problem sets in 5, and so I cut it this time. If anyone's like, give me that Langmuir probe data, I'll happily send it across. You can have a play with it. It's from Dionysus, which is Kevin Waller's machine across the street in NW14 or whatever. OK. So let's just check that this long equation matches what we thought we were going to get. So if we put V0 much, much less than minus Te over 2e, we're going to get no electron current. And that's because this exponential here is just going to go to 0. And so I is going to be roughly equal to the ion saturation current. And the ion saturation current, we can now see, is exponential of minus 1/2 times the density at infinity times e times the area of the probe times Te over mi to the 1/2. So that's cool because we know all of those parameters, apart from the density and the temperature. And those are things we want to measure. So we can't work out from the ion saturation current exactly the density and the temperature, but we know the density times the square root of the temperature. So maybe if we can find out the temperature from somewhere else, we can crack this whole thing open. I keep clicking on the wrong monitor because I've got basically the same thing shown on three monitors.
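That limiting check is easy to do numerically. Below is a minimal sketch of the single-probe characteristic in the form just derived (sheath area taken equal to probe area). The plasma and probe numbers (hydrogen, Te = 2 eV, n = 1e18 per cubic meter, a 1 mm^2 tip) are assumed illustrative values, not anything from the lecture.

```python
import numpy as np

# Physical constants (SI units)
e = 1.602e-19    # elementary charge [C]
me = 9.109e-31   # electron mass [kg]
mi = 1.673e-27   # proton mass [kg]; hydrogenic plasma, Z = 1

def probe_current(V0, Te_eV, n_inf, A):
    """Single-probe current vs bias, valid for V0 < -Te/2e (sheath present).

    V0 is in volts relative to the plasma potential (V at infinity = 0),
    and the sheath area is taken equal to the probe area A.
    """
    cs = np.sqrt(Te_eV * e / mi)   # Bohm (sound) speed [m/s]
    electron_term = 0.5 * np.sqrt(2.0 * mi / (np.pi * me)) * np.exp(V0 / Te_eV)
    ion_term = np.exp(-0.5)        # ~0.61, the presheath density drop
    return n_inf * e * A * cs * (electron_term - ion_term)

Te_eV, n_inf, A = 2.0, 1e18, 1e-6   # assumed: 2 eV hydrogen, 1e18 m^-3, 1 mm^2 tip
I_si = -np.exp(-0.5) * n_inf * e * A * np.sqrt(Te_eV * e / mi)

print(probe_current(-20.0, Te_eV, n_inf, A))   # deep ion saturation: close to I_si
print(I_si)
```

Biasing far below the plasma potential kills the electron exponential and leaves roughly the ion saturation current, which is exactly the limiting case described above.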
So now I've moved on to my next set of slides, but before we go on to interpreting the probes and actually measuring density and temperature, does anyone have any further questions on this key result? All right. So now we're going to go on to interpretation of probes. So let's have a little drawing of our probe. It's going to have some tip like this. That tip is going to be surrounded by some insulator, and then that, in turn, is going to be surrounded by some sort of shield. So this is a cutaway here. It's a cutaway of coaxial geometry, insulator and tip. And if I draw this face on, it looks, as I said, coaxial, something like this. So you can have a lot of fun making Langmuir probes. But they are fundamentally little bits of cut-off coax cable. And there's some reasons why you might want to make them out of certain materials. So we tend to want to work near the floating potential. So we want to have our probe voltage V0 close to Vf. And there's two reasons for that. The first, as we've seen, is that we actually have good theory here. We have the sheath theory. We don't really know what the theory looks like in other places. Maybe we could go work it out, maybe we couldn't. So it's nice to work near the floating potential. The other reason is that it limits currents. And actually, this is a big problem because if you draw a large current, you get large heating. Or let's say this avoids heating. Because if you're sticking your probe inside a plasma, it's already going to get hot. And if you start drawing a large amount of current, it's going to get even hotter. So you don't really want to work in the electron saturation current region where the current is huge. You want to work down where the current is basically 0 to avoid melting this. Even then, you're going to be making the tip of this out of something like tungsten, which has a very high melting point.
Your insulator is going to be some custom ceramic, and your shield will probably be tungsten as well. So you have to design these things very, very carefully not to melt. Cool. So what we're going to do is we're going to sweep V0 near Vf. So you can imagine that we have control over the potential of our probe with some power supply, and we measure the current through that. So this little circuit is: we've got our probe and our little plasma. And we've got some sort of resistor over which we measure the current, and we have some sort of bias that we're biasing the probe to. So this is I0, and this is V0 here. And we're going to be near the floating potential. And if we go back to the equation that we had back here, and we set the current equal to 0-- so if we say I equals 0-- we can rearrange this. And we can find out that the electron charge times the floating potential, where we're operating here, over the electron temperature is going to be equal to 1/2 times the natural logarithm of 2 pi me over mi, minus 1, all in brackets. OK. So this is a vaguely complicated looking expression, but of course the only things that matter in here are the electron mass and the ion mass. And if you put those in for something like hydrogen, you get about minus 3 here. And of course, it doesn't matter if it's not really hydrogen because we've got a natural logarithm here, so it changes slowly. The point here is that this says that your floating potential, Vf-- e times Vf, at least-- is about minus 3 temperature units. So if your temperature is one electron volt, then your floating potential is about 3 volts below the plasma potential. That sounds super useful, but it's not. Why is it not useful? We've measured Vf by sweeping it there, and now from this, we've got the temperature. This is incredible. AUDIENCE: Yeah, but doesn't that just cancel itself back out in that equation? JACK HARE: I'm not sure it does that. Maybe you're thinking about this in the right way. The question would be Vf relative to what?
We had this whole debate at the start. So if you measure Vf in your lab as 10 volts, but you don't know what V in the plasma is, you don't know what the 10 volts that you're measuring is referenced to. And that's referenced to your building ground. But you need to have Vf referenced to the plasma potential. And you don't know the plasma potential. So this is actually pretty useless. So without the plasma potential, we have no reference for Vf. This is useless. You could try and do it. If you remember, if I go all the way back in my notes to my little sketch here-- you could say, well, look. I can measure Vf. Down here is where the current goes to 0. And then I could measure the voltage between Vf and Vp here, V plasma. And I'll define the plasma potential as where the electron saturation current happens-- so when this curve rolls over and starts to saturate, I'll call that V plasma. And the difference here, that'll be delta Vf. And that'll be 3 times the electron temperature. So you could do it crudely like that. The trouble is that, as I've sketched it here, this doesn't really roll off. It actually just keeps going up like that. And if you look at some real data, it's usually even worse than that. So it's a super inaccurate way of measuring the temperature. But you could use this. It's fine. The other problem, of course, is you'd have to bias your probe into the electron saturation region and risk it melting. So good reasons why you wouldn't want to do this technique, but maybe you want to if you're desperate. But don't worry, there's still better things we can do. So again, we could use Vp at I equals the electron saturation current, but that's inaccurate. So we don't tend to do that. What we do instead is an alternative technique where we look at this equation very hard and we think to ourselves, hm. How does I change with V0 here? So we can write down analytically dI/dV0.
And maybe you want to do this around about the floating potential here, because that's where we want to operate, where we've got all our nice sheath theory, where we've got our nice equations. And we find out that what we get is e over Te times I minus the ion saturation current here. Now, when you do this, you also get a little term that looks like-- how does the ion saturation current change with the probe potential? And this is due to the fact that the area of the sheath is not exactly equal to the area of the probe. But for our purposes, we're going to assume that is negligible. So we're just going to set that extra term equal to 0 here. And if you look at the first term, and you think about how to rearrange this, you can see that the electron temperature then can be written as e times I minus Isi, in brackets, and that is divided by dI/dV0, like this. And so if we look again at our plot of the IV characteristic for this probe, where we have the floating potential here, and we have the plasma potential up here, and the electron saturation and the ion saturation, we're working with the slope of the curve at this point here. And what we want to do is a two-step process where, first of all, we fit the natural logarithm of I minus Isi versus V0, and that is going to give us the electron temperature. So you can probably just stare at the equation for electron temperature and see where that comes from. And then once you've got the electron temperature, we can measure the ion saturation as well. And we know that the ion saturation current is equal to exponential of minus 1/2 times e times A times n infinity times Te over mi to the 1/2. We've just measured Te from this, so now we can measure density. So we need to measure the slope here, and we need to measure the ion saturation current. And we can do that just by sweeping V0 in this narrow range here, from the ion saturation current up to just above the floating potential.
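To make the two-step recipe concrete, here is a small sketch that runs the fit on a synthetic swept characteristic. All the numbers (hydrogen, Te = 3 eV, n = 5e17 per cubic meter, the sweep window) are assumed illustrative values, and because the "data" comes from the same model being fit, the recovery is exact; real probe data is far messier, as noted elsewhere in the lecture.

```python
import numpy as np

e, me, mi = 1.602e-19, 9.109e-31, 1.673e-27   # SI constants; hydrogen ions assumed

# Assumed "true" plasma and probe parameters for the synthetic sweep
Te_true, n_true, A = 3.0, 5e17, 1e-6          # eV, m^-3, m^2
cs = np.sqrt(Te_true * e / mi)                 # Bohm speed
I_si = -np.exp(-0.5) * n_true * e * A * cs     # ion saturation current (negative)

# Synthetic swept characteristic, from ion saturation up past V_f
V0 = np.linspace(-15.0, -9.0, 50)              # volts, relative to plasma potential
I = I_si + n_true * e * A * cs * 0.5 * np.sqrt(2.0 * mi / (np.pi * me)) * np.exp(V0 / Te_true)

# Step 1: fit ln(I - I_si) against V0; the slope is 1/Te (with Te in eV)
slope, _ = np.polyfit(V0, np.log(I - I_si), 1)
Te_fit = 1.0 / slope

# Step 2: invert I_si = -exp(-1/2) e A n (Te/mi)^(1/2) for the density
n_fit = -I_si / (np.exp(-0.5) * e * A * np.sqrt(Te_fit * e / mi))

print(Te_fit)   # recovers ~3 eV
print(n_fit)    # recovers ~5e17 m^-3
```

Note the common pitfall mentioned later in the lecture is built into step 1: you must subtract the ion saturation current before taking the logarithm.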
And we never draw very much current because the ion saturation current is very, very low. And that gives us the temperature and the density, which is pretty remarkable for something that we've just stuck in the plasma that's just a little rod of metal. OK. So that's how you use a single Langmuir probe. Any questions on that? AUDIENCE: Do we have a name for the voltage at which the ion current saturates? JACK HARE: No, we don't. AUDIENCE: OK. JACK HARE: Yeah. I'm trying to think what it is. Effectively, we want to have this exponent be very small. So you could write something to do with this being less than 0.61, and it has to be-- I don't know. Let's say it's 10 times less. And then you could define some voltage at which it's 10 times less, and we could call that the ion saturation potential. But I haven't seen that in the literature. But you could work out what it needs to be. So you could say, if I think I know what the temperature is, I know how much I have to bias my probe negative. The other thing is just to bias your probe negative until it starts to roughly asymptote. But as I said before, it never actually asymptotes. What happens is it just keeps going down. So it's a real pain in the ass to measure this properly. OK, good. Daniel, I saw your hand first. AUDIENCE: Yeah. So I thought I had an idea of this until a moment ago, and then realized maybe I don't. But where's the return current in this? Because you can put-- my intuition is you put a voltage on this and it'll do stuff initially. But if you wait long enough, it'll settle to some other charge state where you've just charged the plasma slightly. JACK HARE: Yeah. So the plasma is going to be drawing electrons back off from the vacuum chamber or from the vast infinity of the universe. AUDIENCE: OK. So it is from just other places where electrons can come from. JACK HARE: Now, you've got to remember that all the way around the vacuum chamber, there is a sheath.
And that sheath obviously has balanced ion and electron currents. But the area there is so huge in the vacuum chamber that if this probe is locally drawing a few more electrons, then the rest of the vacuum chamber can just push back slightly fewer electrons, and it'll all work itself out. And as we know, quasi-neutrality is pretty strictly enforced by a plasma because the electric fields get big. So it's no trouble at all to get those electrons back from somewhere else, yeah. AUDIENCE: OK, cool. That makes sense. JACK HARE: Good question, yeah. And Nigel, I see your hand. AUDIENCE: So the practice of actually doing this, besides just sweeping the ion saturation current regime near the floating potential-- you also need to then sweep positive enough to figure out what the plasma potential is? Is that correct? JACK HARE: Oh, no, sorry. With the previous technique, we were talking about if you just want the temperature, then you would have to get the plasma potential. The beauty of this technique is you don't have to go anywhere close to the plasma potential. You're just sweeping. You just need the slope at this point. So you need dI/dV0 at V0 equals Vf. And so of course, to determine that slope, you just need to sweep far enough that you can fit a straight line and you're happy with that. AUDIENCE: And so the benefit of this, like we were saying, is that we don't need to reach high currents, which could screw with theory? JACK HARE: Yeah. So that's one thing. And the other thing is the theory-- yeah, so actually, sorry. You said it there. We don't want to reach high currents where our theory is invalid, or high potentials where our theory is invalid. And we don't want to reach the high currents or high potentials where we draw lots of current and melt our probe. AUDIENCE: Got it. Thank you. JACK HARE: The melting of the probe thing probably sounds a bit funny, but this is genuinely a huge problem, which is why I keep saying it.
So we must operate at the lowest currents possible. AUDIENCE: Oh, no. I work with biased probes. JACK HARE: OK, great. Cool. Nice. Any other questions on this before we go on to some more advanced Langmuir probes? AUDIENCE: Yeah. JACK HARE: Go for it. AUDIENCE: Sure. So how do you measure the ion saturation current if it keeps on going down, like you said? JACK HARE: Yeah. So it drops for a bit-- sorry, you can't see my mouse on my screen. It drops for a bit, and then levels off. And so you just eyeball it as like, it's here. Maybe you would fit some sort of curve. But it's not easy to do, so that gives you a lot of error. And that error won't affect your interpretation of Te, but it will affect your interpretation of the density. AUDIENCE: Why wouldn't it affect your interpretation of Te if it depends on Isi? JACK HARE: Oh, you're quite right. Yes, you're quite right. I guess it's inside a logarithm, so it's less important there. But it will still have some effect. You're quite right, yeah. I just want to point out-- once when a student did this exercise, they forgot to subtract the ion saturation current. And this doesn't really work properly then. So this is a common pitfall. Make sure when you're doing this, you subtract the ion saturation current from your current curve. So it should look like that, right? The thing that you're working with that you're trying to fit. It doesn't matter for the slope, of course-- or it does matter a little bit for the slope, so yeah. OK. We're going to move on. How fast are these mechanisms? Oh, thank you. My notes didn't update, and I'd actually made a new note on this that I did want to mention. So let's talk about limitations. So one limitation is you might draw I greater than the ion saturation current. So you don't know where Vf is to start with. So when you put your probe in and you start adjusting the voltage, you might accidentally draw too much current and melt the probe.
And we'll look at some other designs which are safe and don't ever do that. And so this is a disadvantage of this design. The other disadvantage is the sweep time. So we're talking about literally sweeping the potential. I wish I could draw straight lines. There we go. So if we've got time here, and we've got voltage here, and this is the floating potential, maybe we have a sweep pattern that looks like this. But it takes some time, delta t, to sweep. And that might be limited by how quickly your voltage source can sweep voltage. But it might also be limited by how quickly your digitizer can digitize data, because effectively, as you do this sweep, you're digitizing this curve here. And you want to have enough data points on that to do all of your fitting. And so you are limited. You're very much temporally limited. If there's an event which takes place on a time scale that's faster than this, you won't be able to resolve that. And in fact, it will do horrible things to your data analysis. So you want to make sure that your plasma is not fluctuating on rapid time scales. Yeah, Daniel. I see your hand. AUDIENCE: Yeah. Do you also run into situations where the actual response of the plasma is ever a limiting factor for your measurement time there? JACK HARE: So response being this whole setting up the sheath and things like that? AUDIENCE: Yeah. JACK HARE: Yeah. I think that happens really quickly. I think that's going to happen on electron time scales over a Debye length. And so those are probably very small time scales. So I would guess that delta t of the perturbation is going to be on the order of lambda D over vTe. That's the sort of time scale I can think of. And that seems like a really short time scale. In fact, I know what that time scale is. It is 1 over the electron plasma frequency. And so that's really fast. So maybe in some plasmas, you could reach that timescale. But probably that's all going to happen very quickly.
So I think from the point of view of the plasma, it responds instantaneously to the change in the potential on your probe, and we don't have to worry about that. But if you have a really funky plasma, that could be problematic. AUDIENCE: OK, cool. JACK HARE: I think omega p is probably the fastest timescale-- one of the fastest, probably the fastest timescale we see in most plasmas. Did I see another hand? OK. I'm going to keep moving because I've got a few more things to get through in the next 20 minutes or so. So let's power on. So the next probe we can consider is called the double probe. The previous one we're going to retroactively rename the single probe. And for the double probe-- once again, we've got some sort of plasma. But now we have two probes sticking into the plasma. And the clever thing is that the two probes are attached together, and they are biased with respect to one another by a bias potential, VB. And again, we measure the current. This is just like a resistor. So I can measure the voltage across this resistor, and I'm going to get out the current here. And that goes to an oscilloscope. Now, the cool thing about this setup is that if probe 1 draws a current I1, then probe 2 is going to have to discharge a current I2. And from Kirchhoff's law, we know straight away that I1 plus I2 is equal to 0. So any current that comes in one probe has to be ejected out the other probe. There's no other place for it to go. This probably addresses Daniel's question about the plasma accidentally charging up. It turns out-- and Hutchinson talks about this in great depth in his book-- that almost every probe is really a double probe. It's just that either you have a true double probe, or you have a single probe with the entire vacuum vessel as the other probe surface. And I'm not going to go into that in much detail, but that's a reasonable thing to think about. Yeah, I see a question from Nigel. AUDIENCE: OK. This might be a little out there.
But effectively, because they're in a plasma, there is a pseudo-resistor that we could draw on a circuit diagram between the two tips of the probe. Is there a problem with that creating a closed loop now? And that magnetic flux could go through and then make Kirchhoff's law not entirely valid. JACK HARE: What do you mean by magnetic flux? Because we haven't really talked about magnetic fields very much. AUDIENCE: I guess I've been assuming that these are on a tokamak this whole time. JACK HARE: So is your question, is there a problem if there's a current flowing inside the plasma here? AUDIENCE: No, it's more like if those are connected and there's a resistance between them, if we're just looking at this like a circuit, we've created a closed loop. And so now if you get a change in magnetic flux through the loop closed by that, you could induce an EMF that could affect-- JACK HARE: Yeah, OK. I see what you're saying. I see what you're saying. We're not actually measuring the potential on the probes, though. So we're measuring the current through the probes through this resistor. So if we induce a voltage around the whole loop, I don't know whether we would see that show up across this resistor. That's a really interesting question. I'm going to say that I don't know the answer, and I haven't thought about it much. So I'll certainly have to think about it and get back to you if I have any interesting thoughts on it. But I get what you're saying. You're saying, potentially, this could have some sort of plasma current channel here. And then that looks like a little B-dot probe that we've stuck in the plasma, and something could induce-- yeah, I see what you're saying. Cool. I'll have a think about it. Cool. OK. And so there's other things that we can say about the currents here. So we know that the current going into-- excuse me-- probe 1 is going to be equal to the ion saturation current plus the electron current modified by a factor of e V1 over Te.
So that's just our Boltzmann factor. And the current through probe 2 looks very similar, except that we modify this now by e V2 over Te, like that. So we haven't used Kirchhoff's law yet. This is just saying, if we had two random probes, this is the current that we would draw. Now we use Kirchhoff's law here, and we note the nice thing about this: the current through any probe is now strictly less than the ion saturation current here. So we can't accidentally draw the electron saturation current and melt the probe. So this is a definite advantage of this system. I'll do this a little bit more mathematically in a moment if you're not convinced yet. So then what we do is we continue and we say, we've got these equations separately for I1 and I2. We've got this equation that links I1 and I2 together. And we're also going to operate this so that the probes are floating near the floating potential. In fact, that will happen as soon as we put these probes into the plasma. We're not biasing the probes themselves. We're biasing them with respect to each other, so the whole setup will float towards the floating potential. And that means that our ion current is going to be equal to our electron current, like this. Then we can put all of this together, and we can say that I1 is equal to Isi times 1 minus exponential of e V1 over Te, all in brackets. So that's just using this fact here. And then that is going to be equal to minus I2 from Kirchhoff's law. And so that's going to be equal to minus Isi times 1 minus exponential of e V2 over Te. And then we're going to say that this bias voltage that we're applying is simply, by definition, the difference between V1 and V2. And if we do a little bit of magic on this-- and this is actually an exercise in Hutchinson's book, which I had to do to try and work out where on Earth this came from.
You find out that the current that you're measuring, which is the current flowing into one probe and out the other probe, is going to be equal to the ion saturation current times the hyperbolic tangent of e times the bias voltage over 2Te, like that. And so now if I sketch the IV curve for this, this is V bias and this is I. I end up with something that looks like-- if I could draw straight lines. Give me a moment. OK. And so this is asymptoting at the ion saturation current, or at minus the ion saturation current here. So this is sort of the proof of what I said earlier, that we can't draw more than the ion saturation current here. And this is very nice and symmetric. And I'll show you in a moment how we use that to actually measure all the properties of the plasma. So I'm happy to take questions on this, but if you're like, I don't understand the derivation, the reason is I didn't really do the derivation. There's lots of steps missing here. I'm just giving you a couple of intermediate steps. And if you want to get it, you should go work through the exercise in Hutchinson's book. I'm going to try and write this on this page because I think it'll be more useful. For small bias voltages-- so we're operating just in this little regime here-- we find, similarly to the probe that we had before, that dI by dV bias at V bias equals 0 is e times Isi over 2Te. And we're also going to be able to measure Isi by sweeping just a little bit larger, like this. And so we'll get, again, Isi is equal to whatever equation we had before, which has a proportionality to ne and Te to the 1/2. So from this probe characteristic here, we can now get out both the density and the temperature-- again, from looking at the gradient near the bias voltage of 0 and looking at the ion saturation current as we did before. But the advantage of this is that we can no longer draw a large current by accident. But the limitation is still the sweep time.
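The measurement recipe just described, taking Isi from the saturated wings of the sweep and Te from the gradient at zero bias via dI/dV|0 = e Isi/2Te, can be sketched in a few lines. The numbers below are invented for illustration, not taken from the lecture:

```python
import math

def sweep_current(V_bias, I_si=0.8, T_e=4.0):
    # Double-probe characteristic I = Isi * tanh(e*V_bias / 2*Te),
    # with V_bias in volts and Te in eV (so e*V/Te -> V/Te numerically).
    return I_si * math.tanh(V_bias / (2.0 * T_e))

# "Measure" Isi from the flat part of the sweep at large bias:
I_si_meas = sweep_current(1000.0)

# "Measure" the gradient dI/dV_bias near zero bias:
dV = 1e-4
slope = (sweep_current(dV) - sweep_current(-dV)) / (2.0 * dV)

# Invert dI/dV|0 = e*Isi/(2*Te) for the electron temperature:
T_e_meas = I_si_meas / (2.0 * slope)
print(T_e_meas)  # recovers the Te = 4.0 eV we put in
```

In practice one would fit the whole tanh curve to noisy data rather than use two point estimates, but the two-step version shows where the density and temperature information lives in the characteristic.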
So we still need to sweep this bias voltage up and down so that we can trace out this tanh function so that we can fit it. So you still can't resolve very fast moving things, but this at least is a nice and safe probe. So any questions on this? So no prizes for anyone who guessed after the double probe we go to the triple probe. I promise I will stop after this. So a triple probe is simply a double probe plus a floating probe. So let me sketch that and explain what that means. That's meant to read floating. So we have our double probe as before, like this. And we bias it with some bias potential. And we measure the current across a resistor. But we also have-- stuck somewhere nearby, we have our single probe. So that is left floating. There we go. OK. Sorry. For this probe, we actually don't measure the current through it because we're going to leave it floating. So we actually measure the voltage that it's at here. So for the double probe, we set our bias voltage to be greater than a few electron temperatures. And maybe this takes a bit of trial and error, but we find it in the end. And then we find out that one of the probes is going to draw positive ion saturation current. The second probe is going to draw a negative ion saturation current because it's at a potential greater than the floating potential. This one is at a potential less than the floating potential. And finally, the floating probe is going to allow us to measure the floating potential itself directly. And the reason that this is neat and fundamentally different from the other probes that we've looked at is if I redraw our IV characteristic-- so I and V. And again, we have a floating potential. We have a plasma potential here. We have the ion saturation current and we have the electron saturation current. These three probes are actually representing three different places on this curve.
So probe number one is representing a place down here, probe number three is representing this point, and probe number two is representing the opposite point up here, where we're drawing minus the ion saturation current here. And this means we're measuring three points on an exponential curve. And three points on an exponential curve are enough to specify it. And so we can just fit the IV curve, and we can get out straight away what the density is and what the temperature is. And we can do that without sweeping any voltage here. Remember, we are not sweeping the bias voltage here. We've just set it to a few times Te over e. So the whole system floats near the floating potential. So we can't draw the electron saturation current by design. We also have fast time resolution, because now we're no longer sweeping anything but just digitizing this in time. So we can resolve any short-lived phenomena. But the only problem with this is that it assumes implicitly that your IV curve model is good. And I think that is a big assumption. So you've only got three points on there. So if it starts deviating from exponential, you're effectively not overfitting, but precisely fitting it. And so you won't be able to see any deviations. And that's when people start using quadruple probes, and all sorts of exciting things like that, which we're not going to go into. But you can keep adding probes. And one way to think about adding probes is that you keep adding more points along this line by biasing probes in different places. And if you measure their current and their potential, you can reproduce that curve instantaneously. Of course, if you try and crowd too many probes together in the same place, you're going to cause lots of perturbations and the probes are going to interfere with each other. So this is not flawless, but people do all sorts of clever things with this. OK. So questions? And I see Nigel.
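In the usual limiting case, the three points can be turned into a temperature with no fitting at all. If the bias is several Te, the negatively biased probe sits in ion saturation, and current balance on the biased pair forces exp(e(V+ - Vf)/Te) = 2, giving the standard triple-probe result Te[eV] = (V+ - Vf)/ln 2. A sketch, with invented probe readings:

```python
import math

def triple_probe_Te(V_plus, V_float):
    """Electron temperature in eV from a triple probe.

    With the negatively biased probe deep in ion saturation (drawing +Isi),
    Kirchhoff's law gives Isi*(1 - exp((V+ - Vf)/Te)) = -Isi, so
    exp((V+ - Vf)/Te) = 2 and Te = (V+ - Vf) / ln(2).
    """
    return (V_plus - V_float) / math.log(2.0)

# Hypothetical instantaneous readings from the digitizer, in volts:
print(triple_probe_Te(12.4, 9.6))  # about 4 eV, with no voltage sweep at all
```

Because both voltages are digitized continuously, this gives a temperature at every time sample, which is the fast-time-resolution advantage described above.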
AUDIENCE: Yeah, I wanted to actually ask about the last point you brought up about-- could you talk more about the trade-off between having the probes close together so you ensure that they're measuring the same plasma versus them interfering with each other, like their sheaths overlapping and things like that? JACK HARE: Yeah. It's fair enough. So I can talk a little bit about it, but I'm not an expert. So one thing I'll point out is our sheath thickness is going to be a few Debye lengths. So you might think, well, that's OK. I can keep my probes more than 60 microns apart or something like that. The trouble is that there's actually a region here called the presheath, and although we tried our absolute hardest not to model the presheath very much, if the presheaths start to overlap, it can be a big deal. And the presheath is much larger than the Debye length. It's maybe hundreds of Debye lengths. In fact, it's all a bit subtle. And if you start looking at Hutchinson's book, you realize that I haven't mentioned collisions or magnetic fields at all, and these make this extremely subtle and complicated to work out. So you really want to have your probes spaced far enough apart that the presheaths don't interfere. So if I was doing this in an experiment, I would have one probe, and then I'd put another probe in next to it and see if my signal on the first probe changed. And if it changed significantly, by 10% or more, I would say those probes are too close together. And so you might have to do this a little bit quasi-experimentally in order to work out how close you can put all the probes. AUDIENCE: Thank you. JACK HARE: You're welcome. Any other questions on this? I have just got two bullet points before we finish up. And these are just due to what we would call added complications. So the first complication is magnetic fields. And you definitely might want to use a Langmuir probe in a magnetic field.
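As a quick aside on the spacing estimate in that answer: the "60 microns" scale comes from the sheath being a few Debye lengths thick. A sketch of the scales involved, using invented low-temperature-plasma numbers rather than anything from the lecture:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m
QE = 1.602e-19    # elementary charge, C

def debye_length(n_e, T_e_eV):
    # lambda_D = sqrt(eps0 * Te / (ne * e^2)), converting Te from eV to joules
    return math.sqrt(EPS0 * (T_e_eV * QE) / (n_e * QE**2))

lam_D = debye_length(1e18, 5.0)   # hypothetical: ne = 1e18 m^-3, Te = 5 eV
print(lam_D)          # ~17 microns, so the sheath is tens of microns thick
print(300.0 * lam_D)  # a presheath of hundreds of Debye lengths: ~5 mm
```

The point of the last line is that the presheath, not the sheath, sets the minimum probe spacing, and it can be orders of magnitude larger.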
You might want to put it in the scrape-off layer of a tokamak or something like that. And the big problem with magnetic fields is they alter particle orbits. If you have very strong magnetic fields, this may not seem like too big of a deal. Here's my probe here. I've got a magnetic field that's oriented like this. And I'll say, OK, well, all my particles are going to gyrate around this magnetic field. And so what you end up with is a sheath that's very close in like this, but you end up with a presheath that is effectively swept out by the projection of the probe area along this magnetic field. And you have particles which enter the presheath and then travel down the magnetic field until they hit our probe. So I've probably drawn too many lines, but this is the rough picture here. So this is the presheath, this is the sheath, and this is the probe. It turns out that if you assume your plasma is completely collisionless, this all falls apart, because you need some process to knock the particles from being outside the presheath to inside the presheath to keep fueling the particle flux onto it. And so even if the collisionality is very, very weak, this presheath keeps expanding until it reaches a length scale where the collisionality is important. So the exact size of the presheath is very hard to calculate. And this is only in the case that I've sketched here for very strong magnetic fields. Of course, if your magnetic fields are weaker, your particles are going to be doing things like this, where they spend some of their time inside the presheath and some of the time outside. And then you have to start doing single particle orbit calculations to see how many particles hit your probe, and it becomes very complicated. So probes in magnetic fields are very hard, and people spend their whole careers trying to work on that. The other thing we haven't discussed is collisions here. So in the sheath theory that I presented, I didn't derive it, as I said.
But I had assumed that there were no collisions here so all the particles would just be streaming inwards without interacting. In a real plasma, especially the sorts of temperatures the plasma is at when you tend to work with Langmuir probes, collisions are actually quite important. And these collisions are going to modify your flux, which, if you remember, was the first thing we wrote down-- gamma. And they're going to modify it to include particle diffusion here. And so you need to go back and rederive all these results with this modified gamma in this case. And again, Hutchinson spends a very long time on this in the book because this is something that he cares about deeply. And if you're working with Langmuir probes, then one very good resource is to go look at his book. And then Lieberman and Lichtenberg also deal with magnetized sheaths and collisional sheaths in a great deal of detail. So there's more information there. So I'm very happy to take questions. But I actually have to go teach another class virtually in 4 minutes, so please keep them brief. AUDIENCE: Are there games to play when you sample? Instead of doing a perfect sweep, if you're clever about trying to change the frequency with which you do that, you can get a little bit better idea of how consistent your response is in time or something? Make sure you're not having sampling effect. JACK HARE: I can imagine you could do all sorts of clever things with some fast sweeps and some slow sweeps and not sweeping the whole range, if you only need a few points to fit the characteristic. So yeah, I imagine there are some things that you can do in order to optimize that. Any other questions? Cool. Well, thank you very much, everyone. This is likely to be online on Thursday as well, as I won't be able to stop self-isolating by then. And we'll be talking about refractive index diagnostics. So have a good day, and I'll see some of you again in a couple of minutes at the next class. So bye for now.
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_7_Refractive_Index_Diagnostics_III.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So maybe we'll start with the Schlieren photography stuff. You're thinking to yourself, this is interesting but it doesn't really apply to my research. Maybe you do tokamaks, maybe you do low temperature plasmas. We don't really have high enough density to use visible radiation like lasers, and so we're not really going to be able to see things like shocks, even if there were shocks, and there probably aren't shocks inside your tokamak plasma. Everything is nicely subsonic. So that's OK. Maybe you tuned out a little bit. But today we have a topic which is applicable, really, to every form of plasma, which is interferometry. Because we can do interferometry all the way from very, very low densities all the way up to-- it is possible, or at least applicable, to very high densities, as long as we have some probing radiation which is sufficient to penetrate your plasma. So this is interferometry. And I spend an awful lot of my time thinking about interferograms, and so I really like this stuff. And hopefully, we'll have a good time learning about it as well. So the basic principle of interferometry is that previously, for the Schlieren photography, we were interested in gradients in the refractive index, which were, of course, gradients in the electron density. For interferometry, we're interested in refractive indices which are varying in some way. So they're not just equal to one, which is the refractive index of the vacuum. So with interferometry, we're looking at this refractive index capital N again, which, as we've written a couple of times in a cold, high frequency, unmagnetized plasma, looks like the square root of 1 minus omega p squared over omega squared. And then we wrote that as the square root of 1 minus the density over the critical density. And I promised I would define the critical density because I forgot to do it in one of the earlier lectures. So here's the critical density.
It's defined as epsilon 0 me over e squared times omega squared. So you can see there's a different critical density for every probing frequency, and the rest of it just depends on these fundamental constants. So that doesn't change at all here. And one of the things we want is to be able to relate this refractive index to the frequency and the k vector of our wave, because those are the things that go into the phase of our wave. So just here, I'm just going to write that N is actually defined as the speed of light over the phase velocity of the wave, which is equal to the k vector of the wave times the speed of light over the frequency of the wave. So we're going to be using the link between these two here, which means that if we can measure something about the k vector or the frequency of the wave, we can measure something to do with the electron density of the plasma. And the setup that we'll have, we'll have our plasma like this. And maybe it's got some length, capital L, like this. We'll put our probing radiation through the plasma, and it's got some electron density associated with it as well. And that electron density can vary. It can just be a function of position or of time. We might be interested in measuring the variation in space or in time here. And we call this beam that's gone through the plasma the probe beam. And so far, this looks an awful lot like Schlieren photography. The trick is we also put in a beam that does not go through the plasma; it goes around it. It can go through the same vacuum chamber in a region where there is no plasma, it can go around the vacuum chamber, it doesn't really matter. And we'll call this beam the reference beam, or ref for short here. And what we're interested in knowing is what phase the probe beam and the reference beam acquire. So the probe is going to acquire a phase, phi probe.
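Plugging numbers into this definition makes the point that every probing frequency has its own critical density. A sketch in Python; the wavelength choices are just common examples, not values from the lecture:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m
ME = 9.109e-31    # electron mass, kg
QE = 1.602e-19    # elementary charge, C
C = 2.998e8       # speed of light, m/s

def critical_density(wavelength):
    """n_crit = eps0 * me * omega^2 / e^2, with omega = 2*pi*c / lambda."""
    omega = 2.0 * math.pi * C / wavelength
    return EPS0 * ME * omega**2 / QE**2

print(critical_density(1064e-9))  # Nd:YAG laser: ~1e27 m^-3
print(critical_density(10.6e-6))  # CO2 laser: ~1e25 m^-3
print(critical_density(2.14e-3))  # ~140 GHz microwaves: ~2e20 m^-3
```

This is why a visible laser can probe very dense plasmas while a microwave interferometer is restricted to much lower densities: the probe only propagates where ne is below n_crit.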
Now remember, the phase is the thing-- when we have our oscillating electric and magnetic fields, we have this exponential i of k.x minus omega t, like this, adding this quantity here, which is the phase. So if we want to work this out and we just want to know what it is at an instant in time, we can drop the time variation here. And so this phase that this probe beam picks up is going to be equal to the integral of k.dl, where dl is some infinitesimal distance along the probing line of sight. And that k.dl is going to be equal to, using this definition here, the integral of N omega upon c dl, like that. On the reference beam, the situation is very similar. So we have phi reference. The reference beam is going through a medium where the refractive index is just unity, and so we can just write this as the integral of omega upon c dl. And that means that there's a phase shift between these two quantities, delta phi, which is equal to the phase of the beam going through the plasma minus the phase of the beam going around the plasma. And that is going to be equal to the integral of N minus 1 omega upon c dl, like this. Now, for the electromagnetic radiation, omega is going to be constant in this system. The frequency of the wave doesn't change as it goes through the plasma. The speed of light is constant. So really, this is just an integral over N minus 1 here. And using our Taylor expansion here, where we assume that Ne is much, much less than Ne critical, we can get out an expression here where delta phi is roughly minus omega over 2 c n critical times the integral of Ne dl. So we now have a phase which is linearly proportional to this quantity, the line integrated electron density. So say, for example, our dl was roughly in the z-direction, we would have the integral of Ne dl as a function of x and y, for example. And there's probably still some time variation inside this as well. This could be a function of time as well.
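As a sanity check on that result, the phase shift delta phi = -(omega / 2 c n_crit) * int(ne dl) can be rewritten algebraically as |delta phi| = r_e * lambda * int(ne dl), where r_e is the classical electron radius. The sketch below evaluates both forms for an invented line-integrated density and confirms they agree:

```python
import math

EPS0, ME, QE, C = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
R_E = QE**2 / (4.0 * math.pi * EPS0 * ME * C**2)  # classical electron radius, m

def phase_shift(wavelength, ne_dl):
    """|delta phi| = (omega / (2 c n_crit)) * int(ne dl), in radians."""
    omega = 2.0 * math.pi * C / wavelength
    n_crit = EPS0 * ME * omega**2 / QE**2
    return omega / (2.0 * C * n_crit) * ne_dl

# Hypothetical: a 532 nm probe and a line-integrated density of 1e23 m^-2
dphi = phase_shift(532e-9, 1e23)
print(dphi)                   # ~150 radians
print(dphi / (2 * math.pi))   # ~24 interference fringes

# Same answer from the equivalent form |delta phi| = r_e * lambda * int(ne dl):
assert abs(dphi - R_E * 532e-9 * 1e23) / dphi < 1e-9
```

The second form makes the scaling obvious: longer probing wavelengths give more phase shift per unit line density, which is why low-density plasmas are probed with infrared or microwave sources.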
So we could be trying to probe how this phase changes in time or how this phase changes in space, and we'll look at both types of interferometers. So the goal, clearly now, is to try and measure this phase so that we can get out the electron density. Any questions so far? Yes. AUDIENCE: So in the lines where you're defining capital N as the square root of 1 minus omega p squared on omega squared, and then right below you relate N to k, c, and omega. K is a vector, and so N is also a vector. How do we get the vectorial information from that first line defining N? JACK HARE: So the question was, this k here is a vector, and so this N should be a vector. And so where have I lost the vector nature of it? N definitely is a tensor, I think, in reality. So I've not even touched that. That complicates it even further: where does the tensor nature of it go away? I think at this level, I'm just going to say that this plasma is isotropic and so it doesn't matter. When we bring in the magnetic field later on, we'll find out that N isn't isotropic anymore and you might have to think more carefully about the direction your probe is going in. It's going to have something to do with the direction of k here. So I just want to point out that I'm actually deriving all of this, at least in many places, just using capital N, but that means that later on, you can go and work out what this is for a magnetized plasma or tokamak, where it really matters whether you're doing x-mode or o-mode, all that good stuff. You can go back and rederive this. I just want to try and keep this as simple as possible to start with. And I don't know exactly where we're sort of losing the vector nature. So I'm being a bit wishy-washy here. Any other questions? Any questions from online? AUDIENCE: Yeah. Maybe this is what was just asked, but couldn't-- depending on the type of wave, couldn't the omega change when it goes in the plasma?
JACK HARE: So the frequency of the wave doesn't change, but the wave number does, the wavelength does. The frequency of a wave won't change as it propagates through a plasma. This omega doesn't change; this k, the wave number, definitely changes. So the wavelength changes in the plasma, but the frequency of the radiation doesn't change. Any other questions? OK. And so we'll talk about a few different schematics for actually interfering this reference and probe beam in a moment. But for now, just imagine that we've somehow managed to get the reference and the probe into the same place. And so these two beams of radiation are interfering. And then we'll calculate what we're going to see on our detector. And our detector could be some sort of diode that resolves the signal in time, or it could be some sort of camera that resolves a signal in space. It doesn't really matter for our purposes here. So we're going to have an electric field due to the probe beam, which is equal to whatever the strength of the probe beam is times this exponential of ik.x minus omega t plus delta phi. So this is the key bit. This wave is now ahead or behind in phase by an amount, delta phi, and that delta phi is this quantity here that we're trying to measure. The reference beam is much simpler. It just, again, has a strength of the reference beam, exponential of ik.x minus omega t. There's no additional term here. Where we actually start the 0 of the phase is arbitrary, and I've chosen to put the 0 of the phase such that all of the phase shift is in the probe beam. I could do the same thing [INAUDIBLE]. It doesn't matter now. And then we will have a total electric field that's hitting the detector, which is equal to the probe electric field plus the reference electric field. That is going to be equal to the strength of the probe beam times the exponential of i delta phi, plus the strength of the reference beam.
And all of this is going to be multiplied by our old friend, exponential of ik.x minus omega t, like that. Now, we don't sense electric fields with our detector, because these electric fields are probably oscillating far too fast for our detector to see. Even if you're using visible light, it's very hard to find a detector that's fast enough. Even if you're going down into the gigahertz range, 10 gigahertz detectors are expensive. So we don't resolve the actual oscillations here. What we resolve is a time averaged power hitting our detector. And so we're averaging over some cycle. And so this intensity is equal to these brackets here, which are going to indicate time averaging, and then the total electric field times the complex conjugate of the total electric field. That's going to give us a real value here, which is proportional to the square of the absolute value here. And when you work through all of this, you get out the strength of the probe beam squared plus the strength of the reference beam squared, all of that times 1 plus-- I'm going to run out of space. It's going to squash [INAUDIBLE]. Strength of probe squared plus strength of reference squared, times 1 plus 2 strength of probe strength of reference over the sum of their squares, all of this times cosine of delta phi, like that. And during this averaging process, we've effectively averaged this oscillating function times its complex conjugate. And when we average over multiple periods, this goes to 1, which is why we don't see this anymore. So when we're measuring the signal here, what we're seeing is that the signal depends on delta phi. It doesn't depend on any of these little details, because these are all fluctuating far too fast for us to see. So this is the function that gives the intensity, and that intensity is linked to delta phi here. Anyone know-- if I want to measure delta phi very nicely, I clearly want to maximize this term with respect to this term.
Do you know what values of the probe field strength and the reference field strength I need in order to maximize that? AUDIENCE: Do you want them to be equal? JACK HARE: Yeah. So if I set this equal to this, then I will maximize this. And then, in fact, I can write this as I over I0, identifying this as I0 here. And this will be equal to 1 plus the cosine of delta phi, like that. And so this is quite neat. It's actually relatively easy to balance these. You can just put some attenuators in until they're about the right strength. And it's not a super sensitive function, so if you don't get it perfectly right, you'll still get nice fringes. But this function clearly goes between 0 and 2, like that. So you can have constructive interference, where you have twice as much intensity as you had initially, and destructive interference where you have no intensity at all. And you've presumably come across constructive and destructive interference before. Any questions on that? Yes. AUDIENCE: Maybe I'm asking this question prematurely, but it seems like a major limitation of this is that you have some cyclic pattern where a phase offset of 0-- JACK HARE: Yeah, we're going-- yeah. So the fact we can only measure delta phi modulo 2 pi is a problem, and we're about to address it. But yeah, it's a great question. So I see a question online. Jacob? AUDIENCE: Hi, yeah. Does the length of the time you send out the probe matter? Because if you send the probe for too short of a time, then are you not going to be able to average it over a certain-- is there a certain amount of time you have to average it over, or does it-- JACK HARE: I mean, you want to average it over a few cycles. Really strictly speaking, you only need to average it over one cycle, 1 over omega. But in reality, it's actually really hard to make a probe beam that's that short.
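The balancing argument can be made explicit. The sketch below evaluates the detected intensity I = (Ep^2 + Er^2)(1 + [2 Ep Er/(Ep^2 + Er^2)] cos delta phi) from the derivation above; the field strengths are arbitrary test values:

```python
import math

def detected_intensity(E_p, E_r, dphi):
    """Time-averaged intensity on the detector, arbitrary units:
    I = (Ep^2 + Er^2) * (1 + [2*Ep*Er / (Ep^2 + Er^2)] * cos(dphi))."""
    I0 = E_p**2 + E_r**2
    contrast = 2.0 * E_p * E_r / I0  # equals 1 only when E_p == E_r
    return I0 * (1.0 + contrast * math.cos(dphi))

# Balanced beams: I/I0 swings over the full range from 0 to 2
assert detected_intensity(1.0, 1.0, 0.0) == 4.0            # constructive: 2*I0
assert abs(detected_intensity(1.0, 1.0, math.pi)) < 1e-12  # destructive: 0

# Unbalanced beams wash the fringes out: the minimum no longer reaches zero
print(detected_intensity(1.0, 0.1, math.pi))
```

The contrast factor 2 Ep Er/(Ep^2 + Er^2) is at most 1 by the AM-GM inequality, with equality only for equal field strengths, which is why balancing with attenuators maximizes the fringe visibility.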
And so I think people who do femtosecond lasers and femtosecond interferometry might begin to have to worry about that, but most of us don't have to worry about that. Yeah, so in this case here, this is just a statement that your detector can't possibly catch up, and so it is going to average anyway. Except for if you're doing very low frequency radio interferometry, where you could actually track the waves independently. Yeah. Other questions? I saw one over there. So this is your result. I now just want to take a little detour and show you some ways in which you would set this up so that we can measure delta phi, and then we'll talk about some of the limitations and ways to overcome those limitations. So again, our goal is to measure delta phi, the phase shift within the [INAUDIBLE]. So one configuration that we can use which is quite popular is something called a Mach-Zehnder interferometer. So we have our plasma here. We send our probe beam towards it. We put a beam splitter in. And we send half the beam through the plasma and half the beam around the plasma. I always have to remember to get the beam splitters in the right orientation when I draw them. And then this goes through to our detector here. So the nice thing about this setup-- I should give the name for it, sorry. It's a Mach-Zehnder. The nice thing about a Mach-Zehnder is you set it up so that in the absence of a plasma, this length, l probe, is equal to this length, l reference. That's pretty useful if you're using pulsed radiation. If these two paths are different lengths and the pulses arrive at different times at the detector, you won't get interference. So you want them to arrive at the same time, so you'd like to have it balanced here. It's also-- it's a nice setup here because it's very flexible. These are very easy to set up. You don't have to have this beautiful square configuration.
If, for some reason, your lab is a weird shape and you have to do it like that, you can do something like that, as long as you keep the lengths about the same. So that's quite nice. The trouble with these things is that you do need to get the alignment very good. I'm putting a negative sign here because this is a negative of this setup. When you're positioning all of these mirrors, you need to get them within what's called the coherence length of your laser beam. And that coherence length for a laser could be maybe a centimeter or so if it's a nanosecond pulse. But if you're dealing with picosecond or femtosecond pulses, that distance could be only microns, and then you've got to get the mirrors in the right place within a micron. That's very, very challenging. So this is a problem with alignment and coherence for your laser. So they're easy to set up, but they may be very tricky, depending on the pulse length that you're working with. The next one, that you have probably heard about in other contexts in physics, is the Michelson. So a Michelson interferometer has a beam coming in and it's got a beam splitter. It's got a beam going up, and we put a plasma in one arm of this beam. And the beam goes up and then it bounces back down. At the same time, because this is a beam splitter, some of the light is transmitted through, reflects off another mirror, comes back, reflects off, and these two go down to your detector. This is a Michelson, of the famous Michelson-Morley experiment. This is very sensitive. The reason it's so sensitive is we have a double pass here. So if your plasma density isn't very high, then you can increase your signal delta phi by going through twice. That just comes about because, when you evaluate this dl here, you're obviously doing the integral through the plasma and then back through the plasma the other way. So this enhances your sensitivity.
[INAUDIBLE] But of course, there is a problem here, that you don't want your plasma to change while you're doing this double pass. If your plasma dynamics are such that the density changes as you bounce through and come back the other way, then you'll be measuring, effectively, a different plasma. And so this approximation of the density as constant as you go through in time won't be very good. So you do have some issues with time resolution. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah? AUDIENCE: Does that really come up that much in practice, though? Because you're talking transit time over a reasonable size experiment. It's got to be nanoseconds at longest. JACK HARE: I do nanosecond experiments, yeah. AUDIENCE: OK. JACK HARE: But it might not be a problem for the experiment you're thinking of. So if you're thinking of a tokamak or something, we don't tend to look at dynamics on nanosecond timescales there. But they might be a problem for some people. So it's worth thinking about. Yeah. Any other questions on these two as we pause for a second? I've got one third one to show you. Yes? AUDIENCE: With the one on the left, [INAUDIBLE]. JACK HARE: Mach-Zehnder? AUDIENCE: Yes. You said that you just need the [INAUDIBLE] to be reasonably close together, but at high frequencies-- I would imagine that the difference between them needs to be approximately a wavelength or a couple of wavelengths? JACK HARE: No. No, it doesn't matter. It doesn't matter, because we can only measure the phase difference modulo 2 pi. So if your reference beam goes another 100 phase units, radians, but we can only measure the phase difference modulo 2 pi, in the absence of any plasma, we'll just see the same result as if you went 0 phase units. AUDIENCE: Oh, OK. JACK HARE: Yeah. So they can be different by many wavelengths, but they have to be the same within the coherence length of the source, which defines the length scale over which you can get interference.
If they're not coherent, they can't interfere. Which is, by the way, one reason why we use the same source and we use beam splitters here, if anyone was wondering. You might think, oh, I could just use two lasers. But if you have two lasers, even if by the same manufacturer, sitting in the same laser bay at the same temperature, because of the very slight differences in how those sources are made, they will not be coherent with each other. And so you are better off splitting the radiation and interfering the beam with itself. Yes? AUDIENCE: So if I'm understanding correctly, that means the lengths of the two arms can be different, but they have to be different by a 2 pi multiple. JACK HARE: And if they weren't different by a 2 pi multiple, that would also be OK. So if I-- we'll get onto this in a little bit, but anyway, if I plot the intensity, which is 1 plus cosine of delta phi here, and maybe plot it in time, and think about a case where we've got these two beams with exactly the same length. So if they're going exactly the same length, when they interfere, they're going to interfere constructively. So we'll get an intensity of 2 up here. So this is where my initial delta phi is 0. And I'm not going to put any plasma in here, so I just watch my diode and it reads 2, and it just reads 2 for all time. Say I now change this so that the difference in length between these two gives me [INAUDIBLE]. So this could be 0, 2 pi, 4 pi, something like that. What happens if I have a delta phi of pi or 3 pi, something like that? I chose my lengths so that my phase difference is pi or 3 pi. AUDIENCE: 0. JACK HARE: 0. My detector will just read 0 forever. And if I choose something in the middle, it will read 1 forever. When I put the plasma in, the signal will start to deviate from this. And that's what I'll be measuring. But the background signal, my baseline, I can just measure that in advance. So it doesn't actually matter in reality.
AUDIENCE: So it's like a [INAUDIBLE] probe where we can only measure the time change. JACK HARE: And when we start talking about temporally heterodyne interferometry in a moment, we will see a very deep link between what you're talking about there. We're really measuring time derivatives of phase, but it doesn't really look like it to start with. But we'll get on to the point where we are measuring actually changes in-- yeah, we're only sensitive to changes in phase. I guess that's the main thing. Yeah, whether that's a time derivative or spatial derivative doesn't matter. Other questions? OK. So then the final type of interferometer that I want to talk about is very simple to set up. You have your probing radiation coming through. You've got your plasma like this. And then on the other side of your plasma, you have-- what's this called? I believe this is called a Wollaston prism. This special device called a Wollaston prism. And the cool thing about a Wollaston prism is that it splits your light into two orthogonal polarizations and separates them by angle. So this is a birefringent material. It sends out some of the light up here, and that light is going to have, maybe, polarization in this direction, and the other bit of light is going to come down here. That light is going to have polarization in this direction. Then we put a lens here to start bringing these two back together. And if you know something about interferometry, you're thinking, well, this is useless, because these two are not actually going to interfere, because they've got orthogonal polarizations. You can prove that to yourself: if you go through this and do it in vector notation instead, you eventually end up with an Ep dot Er term. And so if they're orthogonal to each other, you don't get any interference. The beams don't see each other. But the really clever thing here is you put in a 45-degree polarizer. So again, this beam is coming in polarized in this plane.
This beam's coming in polarized out of the plane. But this 45-degree polarizer picks out the 45-degree component of this, the 45-degree component of this. So by the time they hit your detector, they've both got the same polarization. And so then they interfere with each other. You're effectively creating your reference and probe beam from the same beam. So the idea here is that-- in fact, I haven't drawn it very well-- you have an expanded beam, and some of it goes around the plasma. And you effectively end up interfering this bit that's going around the plasma with the bit that's gone through the plasma on the inside. So this is called a Nomarski. Can someone close the doors [INAUDIBLE]? The nice thing about a Nomarski is it's very easy to align. Because we talked before about how if you have femtosecond beams, trying to get all these optics in the right places is very hard. Well here, all of the splitting takes place within the same optics. So it's always in the right place, like that. But because you're interfering a bit of the beam that's outside the plasma with a bit of the beam that's inside the plasma, you have a limited field of view. So you can only image something that's as big as the Wollaston prism you can afford to buy. Those things are expensive. But they've got very small fields of view, so [INAUDIBLE]. And also-- we'll talk about spatial heterodyne interferometry in a little bit-- it's to do with the angle between these two beams here. And that angle is set by your Wollaston prism. And so your fringe pattern, which we'll talk more about later, is fixed. You don't have any ability to modify it, which you do with the other two diagnostics here. So this is the sort of thing you'd use on a femtosecond laser experiment, or a picosecond laser experiment where you really, really need to have an ultrastable interferometer. But you wouldn't necessarily use this on many other sorts of facilities. Any questions on that-- oh, yes. Many, OK.
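The polarizer argument above can be sketched in Jones-vector form (illustrative numbers only; the phase value is made up). Orthogonal polarizations produce no cross term, but projecting both onto a common 45-degree axis restores the interference:

```python
import numpy as np

# Two beams out of the Wollaston prism: orthogonal linear polarizations,
# one carrying the plasma phase shift dphi.
dphi = 1.2  # arbitrary plasma phase (radians)
E_probe = np.exp(1j * dphi) * np.array([1.0, 0.0])  # through the plasma
E_ref   = np.array([0.0, 1.0 + 0.0j])               # around the plasma

def intensity(E):
    # |E|^2 summed over polarization components
    return float(np.real(np.vdot(E, E)))

# Orthogonal polarizations: the Ep dot Er cross term vanishes, so the
# intensity carries no phase information at all.
I_no_polarizer = intensity(E_probe + E_ref)  # always 2.0

# A 45-degree polarizer projects both beams onto the same axis,
# so they can now interfere.
p45 = np.array([1.0, 1.0]) / np.sqrt(2.0)
project = lambda E: np.vdot(p45, E) * p45
I_polarizer = intensity(project(E_probe) + project(E_ref))  # 1 + cos(dphi)

print(I_no_polarizer, I_polarizer)
```

Without the polarizer the detector reads a phase-independent constant; with it, the familiar 1 + cos(delta phi) fringe signal reappears.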
AUDIENCE: So in the Nomarski setup, it seems like you could only get out spatially averaged quantities, or about the electron temperature, since you have to basically encapsulate the whole plasma with your beam. JACK HARE: Sorry, what did you say about the electron temperature? AUDIENCE: I mean, sorry, the electron density. JACK HARE: Oh, OK. Cool. Yes. Yeah. So the question was, do you only get spatially averaged quantities, effectively, if your beam is like-- if you're looking into the beam, your beam is like this, and your plasma is about this size within it. You will be able to resolve all of the line integrated electron density within this region. So you'll get a map of Ne dl that's a function of x and y here. As long as you can fit your entire plasma into the rest of the-- if you imagine translating it, if it fits inside the [INAUDIBLE], you'll still get a full image of the [INAUDIBLE]. So it's not averaged over x and y; it's obviously averaged in the probing direction, z in this case. This might make more sense when we talk about the spatial heterodyne interferometry, and I realize now it was a mistake to introduce some of this stuff before we talked about some of the advanced techniques. So maybe it'll make more sense later and we can talk about this. I saw another question there. Yeah? AUDIENCE: Yeah. I wanted to mention another geometry that hasn't been used a lot, which is self-mixing interferometry. It uses diode lasers with a photodiode behind the laser, so that the probing beam just goes through the plasma and reflects off a mirror, and comes back into the same diode and mixes with the ray that goes out the other side of the laser diode. JACK HARE: So you're talking about a system where you've got a laser like this. It sends out a beam and then it comes back. And where's the detector? AUDIENCE: The detector is behind the laser [INAUDIBLE]. JACK HARE: Yeah. And then how do you get-- what's this interfering with?
AUDIENCE: That's interfering with the light coming from the resonator backwards. JACK HARE: OK. AUDIENCE: Right. JACK HARE: Yeah. AUDIENCE: So it's a very simple geometry-- JACK HARE: This is a type of Michelson. AUDIENCE: Kind of, yeah. JACK HARE: It's a variation. I agree it's different, but it's a variation on the Michelson. AUDIENCE: Yeah. JACK HARE: Yeah. But you're right. I'm not saying that these are the only three interferometers. People have made a lot of them and I'm just-- this is detailing three popular ones. But yeah, you're absolutely right. That's something interesting. I've not come across it before, so thank you. I saw a question online. Nigel? AUDIENCE: Which two beams are actually interfering with each other in the Nomarski setup? JACK HARE: Yeah. Again, I think I should have introduced this later on. So bear with me, it will make more sense later on. I'll try and circle back to the Nomarski once we've done a bit of the lecture that explains what's going on. But thank you. Yeah? AUDIENCE: What does fringe fixed mean again? JACK HARE: Again, we're going to talk about what fringes do, or what fringes are, later on. But effectively, you are fixed in a single configuration. And so you can only measure-- yeah, you don't have very much flexibility to change how your interferometer is set up. So really, fixed fringes is meant to be in comparison to the flexibility of the Mach-Zehnder. And to be honest, the Michelson is pretty flexible as well. But we'll get on to what fringes are and how we change the fringe pattern. And in fact, that's a big part of the problem set as well, is learning about what these look like by doing some synthetic diagnostics for them. Were there any other questions? Yes. AUDIENCE: What's the name of the polarizing material? JACK HARE: Oh, I think this is-- I'm almost certain that this is a Wollaston prism.
And I think it is two slices of some birefringent material like calcite that have been cut and then rotated and glued back together in such a way that they do this splitting. It actually was a basis for how the Vikings navigated during cloudy days across the planet. But we'll talk about that when we do Faraday rotation. Any other questions? All right. Let us get into some of the ambiguity that has already been referenced. So I think this has come up a few times. You probably know what the answer is to this. But say I'm your advisor, or maybe a diagnostician on DIII-D or something like that, and I come and give you a time trace on the interferometer. And I go, what's the density? Can you tell me? Well, what's the phase shift, maybe, is a more reasonable question. So is it obvious? So let's draw out a sort of a tree diagram of different phases it could be. So this is time here. This is delta phi. I'm going to be nice. I'm going to say, let's assume we started at 0. So going back, this signal was like that. And then the plasma started here at t equals 0. And then this signal started wobbling. So this looks like, to the best of my drawing ability, some sort of cosine function. And it seems to be a cosine. And again, to the best of my drawing ability, maybe these are all evenly spaced peaks. And so maybe this is just a drawing of cosine delta phi where delta phi is just equal to some constant times time. It's just linearly ramping up, like that. Certainly one possible trajectory. Any other possible trajectories? AUDIENCE: Could be negative. JACK HARE: Could be negative. I have no way of telling the difference between those two. Now let's put-- let's start making this interesting. Let's say this is pi and this is 2 pi up here, and this is minus pi, this is minus 2 pi. Any ideas for other trajectories now? Yeah. AUDIENCE: I think if you add some sort of sawtooth thing, you're going down. JACK HARE: Yeah, it could start going down. Or no, it's not going out.
What happens here? Do I just continue? What else could I do? AUDIENCE: [INAUDIBLE] JACK HARE: I could go down. So here, I have a choice to go up like this. Here, I've got a choice to go here, like this. Here, I've got a choice to go like this or like that. I also have a choice coming from the other way to go like this or like this, and it's now like driving in Boston. It's getting extremely complicated. And I can just keep playing this game. I don't have to stop any time. I am going to insist on causality. I don't think a plasma goes back in time. So we don't have to worry about that. But in general, this gets very complicated. You cannot tell me what delta phi, and therefore the density, is doing. You're quite right. A solution to this is a linearly ramping up density. Another solution is a sawtooth. It could be something that goes like this. Hard to get negative density, but something like that, where you start with some background density here, and then the density drops and goes back up. There's lots and lots of different solutions. And so the problem is that for all of these phi of t tracks, we end up with the same intensity signal. And so this is the inverse problem. We want to go from the data on our diagnostic back to what the plasma is doing. And we can't do it, because it's not well-posed. There are multiple different solutions. This is not completely hopeless, right? We are physicists, and so we have some intuition or priors about the world. So we might be able to use our-- I'll call them priors because that makes it sound like I'm some Bayesian person, but think of them just as intuition. So you might say, well, a reasonable thing for any plasma to do would be to not exist, exist, and then not exist again, like this. Some sort of shape that comes from simulations. Something like that. And so if you count the fringes here-- eventually they will stop. I mean, we assume that plasma won't go on forever.
And then maybe we could pair up all of these, and count to make sure we got an even number or an odd number of them, and then we can reconstruct this. But of course, there's no guarantee that in the middle, it doesn't do that. So it may have a more complicated shape. Again, we could just do a simulation. And from our simulation, we could get a synthetic i of t. And then we can go, hey, my simulation says this, and my data looks the same. Therefore, it's the same as the simulation. But you haven't proved it. You just said it's consistent with your simulation. You can't prove anything using this technique or what we're going to do in a moment, which are advanced techniques. And in the problem set, you will encounter at least two other advanced techniques which we'll not be covering in class, which help to try and avoid this phase ambiguity. We're going to jump straight to the gold standard, in my opinion. But those other two techniques are significantly cheaper [INAUDIBLE]. And so if you can get away with doing them, you would do those instead. So any questions on the phase ambiguity before we jump into the really good stuff? AUDIENCE: Quick question. JACK HARE: Yeah? AUDIENCE: So is there not information about the phase in the incoming light at all, then? But isn't there usually-- can't you find some phase by polarizations and such in the light? JACK HARE: I don't think the phase is linked to the polarization, no. I can have any arbitrary polarization with any arbitrary phase. They're not linked quantities. AUDIENCE: OK. JACK HARE: But if we had a detector that was fast enough, then we could actually measure the phase kx minus omega t. But we can't do that. We can only measure the phase shift with respect to our reference beam. And that is, with this setup, ambiguous. AUDIENCE: OK, I see. JACK HARE: Was there another question? All right. Probably this.
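The phase ambiguity described above is easy to demonstrate numerically (illustrative numbers): several quite different phase histories produce bit-for-bit identical homodyne traces.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)  # arbitrary time units

# Three different phase histories delta_phi(t):
phi_up   =  6.0 * np.pi * t   # linearly ramping up
phi_down = -6.0 * np.pi * t   # linearly ramping down
# a "sawtooth" that bounces between 0 and pi instead of ramping through:
phi_saw  = np.pi - np.abs((6.0 * np.pi * t) % (2.0 * np.pi) - np.pi)

# Homodyne detector: I = 1 + cos(delta_phi).
I_up   = 1.0 + np.cos(phi_up)
I_down = 1.0 + np.cos(phi_down)
I_saw  = 1.0 + np.cos(phi_saw)

# All three give numerically identical intensity traces -- the
# inverse problem is not well-posed.
print(np.allclose(I_up, I_down), np.allclose(I_up, I_saw))  # True True
```

Because cosine is even and 2 pi periodic, the ramp-up, ramp-down, and bouncing trajectories are indistinguishable on the detector, which is exactly the tree-diagram problem on the board.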
Rishi looked into getting one of these fancy AI cameras that would rotate and follow me around the room so I could use all of these boards and still broadcast. But the reviews were not very good. Seems like it should be easy, right? Isn't there something that can just point at me all the time and I can wander around? But anyway, if anyone needs a start-up idea and is thinking of dropping out, that's one for free. Really, really huge market for egotistical academics who want to be filmed all the time. AUDIENCE: Probably need it to work [INAUDIBLE]. JACK HARE: Yeah. The motion capture [INAUDIBLE]. Oh well, it can't be more ridiculous than what I normally wear. So what we're going to talk about is a technique called heterodyning. Heterodyning. Or we might call this a heterodyne system. And heterodyning, or a heterodyne system, is very similar to how an FM radio works. So FM radio, anyone know what the F and the M stand for? Radio is this thing we had before podcasts. Yes? AUDIENCE: Frequency modulated. JACK HARE: Frequency modulated. OK, cool. And so what we're going to be doing is using some tricks to separate the signal that we want, delta phi, from some larger background signal, which is the reason we use FM. But in this case, we're also using this trick because it helps us distinguish between the phase going down and the phase going up, and also the phase rolling over 2 pi. And so these techniques are extremely powerful. And the way that we do that is we notice that we've got this phase here. And remember, this phase shows up in a function that looks like exponential of i k dot x minus omega t plus delta phi. By the way, I'm probably going to drop the delta in most of this because it's-- oh, no. It looks like I've kept it. I've kept the delta. OK, ignore me. So you notice when you look at this delta phi, which should be inside these brackets because we multiply it by the imaginary unit, it looks an awful lot like a frequency times a time or a k times an x.
And this means that we can say, what if we thought that we had some sort of frequency which was equal to the change in phase in time, or some sort of k that was proportional or equal to the gradient of the phase in space? And then if we put these into this equation, they would actually end up looking a little bit like an effective k vector, or an effective frequency. So this is the basis for it. Hold that thought. And we're going to go see how that works and why this is a really, really good idea. So let's have a think about a system now where we put some radiation through a plasma, and it's got some frequency, omega 1 here, and we mix it with another beam which has a frequency omega 2. So you can think about these as the probe and the reference beam, but they've no longer got the same frequency. We'll talk about how you get that in the end. And that means that the electric field that you have on the other side here has got some default electric field strength. But then it's got a cosine of omega 1 t plus delta phi, that's the one that's gone through the plasma, and a cosine of omega 2 t. That's the one that's gone around the plasma. So one's gone through the plasma and has picked up our standard phase shift. But we're also keeping track of the fact that in the time it's taken for them to go through this plasma, we've had a different number of oscillations, because one of them is oscillating at omega 1 and the other one's oscillating at omega 2. We do the same thing again and we work out the intensity here. And we get out cosine squared of omega 1 t plus delta phi plus cosine squared of omega 2 t. So again, I'm just squaring these brackets here. It's not particularly complicated. And then we have the cross-term here, 2 cosine of omega 1 t plus delta phi cosine of omega 2 t. And then we squint at this for a little bit and we realize we know some very clever trig identities, and we end up with this formula, which looks like cosine of squared of omega 1 t. Sorry, just cosine.
Just cosine. Cosine of-- I'm going to write this on the next line up [INAUDIBLE]. Cosine of a term that looks like omega 1 minus omega 2t plus delta phi and a second term that looks like cosine of omega 1 plus omega 2t plus delta phi. And probably a lot of you have seen this before when you've looked at wave mixing because you see that we have a term, which is the difference term and we have a term, which is oscillating at the sum frequency. And sometimes this difference term is called the beat frequency because this is the sort of slow, modulating beat where if you play two notes slightly out of tune, that's the beat frequency that you hear. And in fact, that analogy is quite good because in general, we can detect the beat frequency. Our detectors are fast enough to detect omega 1 minus omega 2, assuming that omega 1 and omega 2 are quite close. But our detectors are still not fast enough to detect this sum frequency. And so when we do averaging over a few periods, this sum frequency on any reasonable detector is just going to be averaged out and it's just going to be 0. And this is the term that we're going to be able to measure here. Let's just do a little check for our sanity. If we say that omega 1 is equal to omega 2, we've gone back to one of our standard homodyne-- not heterodyne, homodyne interferometers. And our homodyne interferometer is going to have i is equal to 1 plus cosine phi. Don't ask me where the 1 came from. It was spotted [INAUDIBLE]. I'll have to check that later. The 1 shows up somewhere, I'm confident. I'm pretty certain it doesn't show up here because the average of a cosine is just 0. I'm wondering if it shows up somewhere between these steps as just the 1 at the front there. I suspect it does. But I can't prove that. Suspect you'll end up with a cos squared omega t plus a sine squared omega t, and that will give you a 1. 
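As a quick numerical check of this mixing algebra (all frequencies are made-up illustrative values, not from the lecture): square the two-frequency field, then average over a detector window that is long compared to the optical period but short compared to the beat period. The cos-squared terms each average to 1/2, the sum-frequency term washes out, and the beat term survives.

```python
import numpy as np

f1, f2 = 1.00e9, 0.99e9                  # Hz; beat at 10 MHz (illustrative)
w1, w2 = 2.0 * np.pi * f1, 2.0 * np.pi * f2
dphi = 0.7                               # a fixed plasma phase shift

dt = 1.0e-11
t = np.arange(0.0, 2.0e-7, dt)

E = np.cos(w1 * t + dphi) + np.cos(w2 * t)
I_fast = E**2

# Slow detector: boxcar average over roughly 10 optical periods.
n = int(10.0 / f1 / dt)
I_slow = np.convolve(I_fast, np.ones(n) / n, mode="same")

# Expected slow signal: 1/2 + 1/2 from the cos^2 terms, plus the beat.
I_beat = 1.0 + np.cos((w1 - w2) * t + dphi)

core = slice(n, len(t) - n)              # skip convolution edge effects
print(np.max(np.abs(I_slow[core] - I_beat[core])))  # small residual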
But in the case that we're interested in, because we don't want to just simply reproduce what we derived before with much more algebra, we would get out-- for different frequencies, we would have i is equal to 1 plus-- there should be a delta phi up there-- cosine of omega 1 minus omega 2, and then a term that we're going to say looks like the time rate of change of the phase, which we said is a frequency-like quantity, it's certainly got the right units, all times time, like this. So this detector, when the phase is 0-- when d phi dt is just 0, so say this is a phi like this in time, this is intensity. A detector is just going to chug along, recording a signal which is oscillating at the beat frequency omega 1 minus omega 2. But when that phase begins to change, this frequency itself is going to begin to change. And what we tend to do is we set our beat frequency, omega 1 minus omega 2, to be much larger than the effective frequency we get in the time rate of change of the phase. And now let me draw you a sketch of what might be happening here and then we can have a chat about what's going on. So let's imagine that we've got some time trace of density and it's just something peaked like this. And then we think, what are we going to see on our interferometer here? Like I said, in the absence of any plasma here, our interferometer would just have a signal that looks like this. So this is where Ne equals 0, so therefore delta phi is 0. But when we add the plasma in, we're going to see something very interesting. We're going to start off, when the gradients are small, with exactly the same signal. But as we get to the point where the gradients are large here, this frequency is going to start increasing. We're going to have a faster signal. Now, around the point here where dn dt is 0, we should end up with the same frequency, which is pretty hard to draw, but I'm trying to draw the inverse of this one here. And then what happens here is dn dt is now negative.
And instead of having fast waves here, we're going to have a more slowly oscillating system. So it's going to look like that before eventually following the Ne equals 0 signal here. So here, we get oscillations which are closer, so higher omega. And here we get oscillations which are further apart, corresponding to lower omega. And I may have screwed this up by getting my sine wave wrong, which I have, because the phase is the negative of the electron density. So I apologize for that. But you get the idea. If I redrew this, I would have the further spaced fringes here as the density ramps up, and the closer spaced ones here [INAUDIBLE]. And you have a chance to get the sign right when you do this in your problem set as well. And I will say one more thing, and then I'll take questions, of which I'm sure there are many. We now have an unambiguous signal, in the sense that the signal you get when d phi dt is less than 0 is not equal to the signal we get when d phi dt is greater than 0. And if you remember, that was the problem that we had back here, where we had this ambiguity where we couldn't tell whether the phase was going up or down when we reached these inflection points. And we've now solved it. So this is great. OK, questions. Yes? AUDIENCE: So if you're writing d phi dt, is that actually d delta phi by dt? JACK HARE: Yeah, sorry. I think I did drop it somewhere. Let's put a little delta over here. Might have missed it in a few other places. There's one there. Sorry. Should have been delta phi. So this is the change in the difference between the reference and the probe beam. So sort of few differences in deltas and stuff like that going on. Yeah, another question. AUDIENCE: [INAUDIBLE] JACK HARE: No, absolutely not. It works well with both, but it certainly works well with the CW system, maybe better.
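The board sketch can be reproduced in code (all numbers illustrative): a Gaussian pulse in delta phi rides on the beat, and the instantaneous fringe frequency, f_beat plus (1/2 pi) d(delta phi)/dt, dips below the carrier while the density rises and climbs above it while the density falls. That asymmetry is what removes the up/down ambiguity.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20000)               # arbitrary time units
f_beat = 40.0                                  # beat frequency >> phase rate
dphi = -8.0 * np.exp(-((t - 0.5) / 0.1) ** 2)  # phase ~ -(density pulse)

# Temporally heterodyned detector signal:
I = 1.0 + np.cos(2.0 * np.pi * f_beat * t + dphi)

# Instantaneous fringe frequency = f_beat + (1/2pi) d(dphi)/dt:
f_inst = f_beat + np.gradient(dphi, t) / (2.0 * np.pi)

# The fringe rate drops below f_beat on the rising side of the density
# pulse and rises above it on the falling side.
print(f_inst.min() < f_beat < f_inst.max())  # True
```

Rising and falling density now produce visibly different fringe spacings, instead of the identical traces a homodyne system would give.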
You would like to probably have the length of your pulse be longer than the lifetime of the plasma, because you'd like to be able to see this increase-- at least the increase in density here. If you started your interferometer here-- sorry? AUDIENCE: That's in time [INAUDIBLE]? JACK HARE: This is in time, yes. I'm doing-- sorry, this is the temporally heterodyne version. We'll do the spatial one later. So temporally heterodyne interferometry. Fun to say. OK, other questions. Yeah. AUDIENCE: If we have two different frequencies of light, how do we get around the coherence length effects that we were talking about before when we had only one frequency? JACK HARE: The question was, if we have two different frequencies of light, how do we get around the interference effects that we had before, the coherence effects? We will show in the next page of notes how we derive two different frequencies from the same source. And then they will have the same coherence. But we will-- I should be very clear that the difference between the frequencies, we can call it delta omega, is very small. So this is like detuning your laser by one part in 10 to the 8, or something like that. It's extraordinarily small frequency changes. You don't need very much to do this technique. AUDIENCE: But still the same source? JACK HARE: And so what we'll show is that you can use the same source, and so we'll have the same coherence properties. But we will shift one of them in frequency very slightly. I see a question online. AUDIENCE: Hi, yeah. Could you just reiterate over the relationship between the density and the intensity that you drew on the right? So the peak density right there, you have it-- it's as if there is no phase change? JACK HARE: Yeah. So at this peak density, there is no phase change. And so the frequency of the wave will be the same as the frequency of the green wave, which is what I drew as the background signal.
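As a quick sanity check on the "one part in 10 to the 8" detuning mentioned above, assuming a hypothetical 532 nm visible laser (the wavelength is an assumption for illustration, not a value from the lecture):

```python
# Fractional detuning of 1e-8 on an optical carrier:
c = 3.0e8                   # speed of light, m/s
lam = 532e-9                # m; example visible wavelength (assumption)
f_optical = c / lam         # ~5.6e14 Hz optical frequency
f_beat = 1.0e-8 * f_optical # resulting beat frequency

print(f"optical: {f_optical:.2e} Hz, beat: {f_beat:.2e} Hz")
```

The beat lands in the few-MHz range, which an ordinary photodiode and scope can follow, even though the optical carrier itself is far too fast to detect directly.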
And because I've drawn the wrong number of oscillations, it's now got the opposite phase, but that doesn't matter. It's still got the same frequency. I tried to draw it so it sort of has the same wavelength here. AUDIENCE: It's frequency-- JACK HARE: I say wavelength, but this is in time, so it's frequency. Yeah. AUDIENCE: It's frequency, but it's inversed because-- JACK HARE: Yeah. It's got the-- I've tried to draw it so it has the same frequency. So in reality, if you actually fulfill this condition, omega 1 minus omega 2 is much greater than this, then I will show you, from the data I managed to find online later, what your signal actually looks like. It's just like this, because it's oscillating incredibly quickly and there are just incredibly tiny changes to the fringe spacing. But that's not very informative if I try and draw that on the board. So I've done a shitty version where I haven't actually fulfilled this criterion, but at least intuitively, you can see. And again, when you do the problem set, you'll be coding this up yourself. So you can have a play with changing this beat frequency compared to how much the phase is changing, and you can see for yourself what criteria you need to get a good signal out and how this actually works. But it's pretty hard to draw it on the board, unfortunately. But yeah, that's the idea. What you're meant to get from this is the frequency is higher here, the same here, and lower here. And that's directly related to the gradients of the electron density with respect to time. Yeah. AUDIENCE: Got it. Thank you. JACK HARE: You're welcome. Sorry, go ahead. AUDIENCE: Sorry if I missed this earlier, is there a kind of limit on the frequency of the density changes in time that we can resolve? JACK HARE: You can-- so if your density is changing rapidly, you need to have a higher beat frequency to be able to resolve it. So you need to keep this going.
So if you have a very rapidly changing density, like on the nanosecond timescale, then you need to have a beat frequency which is larger than that. And then your trouble is your digitizer. So you do need to resolve that beat frequency, which is what we actually have to do here. Previously, I was like, hey, we don't care about the frequencies because we'll just average them out, but you do need to resolve the beat frequency because you do need to see this. Then you have to get a very expensive digitizer. If you can get a 50 gigahertz digitizer, you're pretty happy these days, and that's going to cost you tens of thousands of dollars. So they do this at Sandia National Labs for a related technique. And there, they have 24 channels at 50 gigahertz and it costs them millions of dollars to do it. So those sorts of timescales are completely out of reach at the university level. If you're doing this on a tokamak, you can do it much more slowly, and then you can use a megahertz scope, which you can get for like $200 off eBay. So then that would be acceptable. So yeah, when I'm teaching the course, it's meant to be on principles, so I don't talk about technology too much. But every now and again, there are some really hard technological limits that mean that these techniques will work in some regimes and not in other regimes. You also need a detector that can detect that frequency. So we have good detectors in the infrared, and in the gigahertz, and in the visible, but we don't have good detectors, really, for x-rays, and we don't have good detectors in the terahertz range at the moment. We're getting better. So there are some other technological limits. Yeah, another question. AUDIENCE: Yeah. So you mentioned that this is an advanced technique. To me, this seems like resolving what is a pretty substantial flaw with the simpler techniques. So does anyone in reality implement the simpler techniques, or is this pretty much what everyone has to do?
JACK HARE: Well, the question is, does everyone do temporally heterodyne interferometry, or do they do homodyne interferometry? I don't think most people will do homodyne interferometry. Did I lose it? I think it's [INAUDIBLE]. I guess this is a sample homodyne signal. You could imagine, if you only had very small phase shifts, less than 2 pi, then homodyne would be fine. Like if you knew off the top of your head, or from other experiments, that the signal just did that and never got to the ambiguous phase down here, then homodyne is fine. If you don't have a lot of funding, then that's fine as well. No, I'm serious, because this heterodyne technique is very expensive and you've got to be able to generate two different, very closely and precisely tuned frequencies. So you've got to have very stable frequency sources, because if they start drifting with respect to each other during the experiment, you can't tell whether that's due to plasma or due to drift. And so this is a very expensive technique. And in the problem set, you'll come across two other techniques called quadrature and triature, which actually resolve this phase ambiguity. Quadrature almost resolves it but it has a pathological case, which hopefully you'll find. Triature does resolve it, and that only requires two or three slow detectors as opposed to one very fast detector. So there's good reasons for using those simpler techniques. Yeah. Yeah? AUDIENCE: This one doesn't really-- it's kind of a prior, but in most cases, the shift from running upwards in density and then running downwards, is that necessarily going to be the point where the phase is exactly pi or 2 pi? So you're going to be able to determine that here, my phase shift started going backwards, from just the homodyne image. So you're not-- you can't tell by looking at it, but at some point, you can guess that it's probably the point where the [INAUDIBLE] started going backwards.
JACK HARE: You said you can guess that it's probably-- so you're talking-- I agree, about priors. So you can use this technique if you have strong priors, and this is one of the reasons why these are actually relatively hard to automate. You often need a human to have a look at the signal. But I completely agree, you can do a lot of stuff if you have some strong priors about what your plasma is doing. If you are trying to measure density fluctuations in the edge of a tokamak, I would argue that it's very hard to have good priors for what that is. And so this is a very difficult technique. If you're trying to measure the afterglow of a plasma discharge where it exponentially decays, then yeah, it's fine. But yeah, I agree. You can get around these with a little bit of extra knowledge. But if you don't have any good priors, if you're too stupid to have any intuition, you can spend a lot of money and resolve that ambiguity entirely. So that's always a nice thing in diagnostics. So any other questions? Yeah? AUDIENCE: When you started this discussion [INAUDIBLE]. But why don't we write that down again? And how [INAUDIBLE] omega 1 and 2? JACK HARE: It's over here. So we had-- if I skipped a line here, it was because this bracket is really omega 1 minus omega 2, times t. I think we can agree on that. It's actually-- sorry, it's up here. So we definitely have omega 1 minus omega 2, times t. What's interesting is, outside the times t, we still have this delta phi phase. But if delta phi is changing in time, the oscillations which it produces in i are going to look like the oscillations that we would produce if we treated d phi dt as a frequency. And I think that's kind of a difficult thing to get your head around. But like if this changes in time, you're going to see changes in i that look like oscillations, and those oscillations are going to look like the same oscillations you'd get from having a frequency.
So effectively, this forms a third frequency, omega 3, this one here, which is mixed in with the other two frequencies. AUDIENCE: So do we just drop k, or? JACK HARE: Oh. That's spatially heterodyne interferometry. We'll talk about that next. AUDIENCE: So if you think of this demonstration [INAUDIBLE]?? JACK HARE: Yeah. So what we're doing here is-- what we're doing here is we just have a single ray of light going through the plasma and the detector, which is not an imaging detector. It's just a diode, which is recording a signal in time. If we want to use this and we want to measure gradients in phi, then we need to have an imaging detector like a camera. And it's likely that if we do have an imaging detector like a camera, we will not have the resolution on it to detect frequency changes. We'll get one picture or something like that. And so we usually either do this temporally heterodyne version or the spatially heterodyne version. And when we do spatial heterodyning, we're not mixing in omega 1 and omega 2, we're mixing in k1 and k2. And we'll talk about that in a moment. They're mathematically identical; it's very beautiful. This one is simpler because we don't have any dot products in it. So I'd like to start with this one. And this is also the one that the tokamak folks are more likely to use. But in my research, we used the spatially heterodyne version like this, which gives you pictures of the plasma and the electron density within it. Any other questions? Let's keep going. Maybe I should have drawn this earlier. So actually the way you can interpret this temporally heterodyne system is you're going to be doing Fourier transforms on this signal. And you'll be doing Fourier transforms of small windows so that you get the frequency within a small window, because obviously, the frequency of your blue curve is changing in time and that's what you want to detect. So you can plot this, thinking in frequency space.
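This frequency-space interpretation can be checked numerically. The sketch below is my own illustration (sample rate, phase ramp, and beat frequency are invented numbers, not from the lecture): the spectrum of a homodyne signal is blind to the sign of d phi dt, while mixing in a beat frequency omega 1 minus omega 2 moves plus and minus d phi dt to different frequencies.

```python
import numpy as np

fs = 1000.0                      # sample rate [arbitrary units]
t = np.arange(0, 1.0, 1.0 / fs)
dphi_dt = 2 * np.pi * 30.0       # phase ramp rate -> a 30 "Hz" oscillation
w_beat = 2 * np.pi * 200.0       # beat frequency, chosen >> d(phi)/dt

# Homodyne: the spectral peak sits at 30 regardless of the sign of the ramp.
for sign in (+1, -1):
    spec = np.abs(np.fft.rfft(np.cos(sign * dphi_dt * t)))
    f_peak = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec)]
    assert np.isclose(f_peak, 30.0)

# Heterodyne: the peak moves to f_beat +/- 30, so the sign is resolved.
for sign, expected in ((+1, 230.0), (-1, 170.0)):
    spec = np.abs(np.fft.rfft(np.cos(w_beat * t + sign * dphi_dt * t)))
    f_peak = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec)]
    assert np.isclose(f_peak, expected)
```

In a real measurement you would do this on short sliding windows, as described above, since d phi dt changes during the shot.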
So we take our Fourier transform of i of t and we get out some frequency space intensity. For a homodyne system, we have omega here and 0 at the center here, we might, for example, have a signal here which corresponds to d phi dt. So on some short time window, this is the frequency induced by the changing the phase. But we can't tell the difference between this signal and one at a negative frequency. So when we look at the Fourier picture, we see why there's this fundamental ambiguity. So this is for a homodyne system. And the equivalence of these two is our ambiguity. When we have a heterodyne system, we have 0 here and omega here, we are mixing in our phase signal with this beat frequency. And so this beat frequency, omega 1 minus omega 2, means that if omega 1 minus omega 2 is greater than d phi dt, then the signal that we would get from plus d phi dt now occurs at a different frequency from the signal that we would get at minus d phi dt. And so you can clearly resolve whether you got plus d phi dt or minus d phi dt here. So this is the heterodyning system. And this is where the link to FM radio comes in. In FM radio, we have a high frequency carrier signal that's equivalent to our beat frequency. And the audio is encoded as a modulation to that high frequency carrier signal. For FM, it's so that we can filter out around that carrier signal, and those carrier signals transmit better. In this case, that does help with the signal to noise issues, but also more importantly, it means that, again, we can tell whether d phi dt is positive or negative, whereas in this case, it would give us, with the symmetry of the Fourier transform, the equivalent signal to if we had a negative frequency so that [INAUDIBLE].. So hopefully that helps those of you who like Fourier transforms to think about it. If not, maybe something to ponder [INAUDIBLE].. So just a quick note on practicalities. So practicalities, in this case, is generating omega 1 and omega 2. 
So as we already said, we really want just one source so that we can keep coherence. So one thing you can do is you could split off your incoming laser at omega 1 with a beam splitter and you can reflect half the light off a mirror which you are rapidly accelerating towards or away from the beam. And then we would have omega 2 is equal to omega 1 times 1 plus v over c, if we're able to get the mirror to go at velocity v. Obviously, this is only going to work where we only want a relatively low beat frequency. We can't have this mirror moving at arbitrarily high velocities. But for small frequency shifts, the Doppler shift on that will be pretty significant and that works pretty well. The trouble is, of course, presumably, your mirror has to stop somewhere. And so you have a limited amount of time you can accelerate the mirror for-- move the mirror in one direction before it stops, and so that will limit the length of your pulse. If you want a continuous version of this, you have a reflecting wheel that is spinning at some velocity, v, like this, and you bounce your light off it. Omega 1 comes in, omega 2 comes out like that. And we find that omega 2 here is going to be equal to omega 1 times 1 plus v over c, over 1 minus v over c, because it actually gets Doppler shifted twice: once on the way in, once on the way out. This should probably be more like a grazing reflection in order to be able to use this. Otherwise, there'd be some geometric correction to it. And then the final thing you can do is this really neat technique which is called an acousto-optical modulator. These are useful for very many different things, and effectively, it's a material that has strong sound waves inside it. And you put in one of your beams of light and when it reflects off, it's undergone an interaction with the sound waves here. So we have photon-phonon scattering. And depending on the frequency of that phonon, we can get out a different frequency here.
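To get a feel for the scales in the two Doppler schemes, here is a rough numerical sketch; the laser wavelength and mirror/wheel velocities are my own assumed numbers, not from the lecture.

```python
c = 2.998e8                      # speed of light [m/s]
f1 = c / 1064e-9                 # assumed 1064 nm laser frequency [Hz]

# Single accelerating mirror: f2 = f1 * (1 + v/c)
v_mirror = 1.0                   # mirror velocity [m/s]
beat_mirror = f1 * v_mirror / c  # beat frequency f2 - f1

# Rotating wheel, shifted once on the way in and once on the way out:
# f2 = f1 * (1 + v/c) / (1 - v/c), roughly twice the shift per unit velocity.
v_wheel = 10.0                   # wheel surface speed [m/s]
beat_wheel = f1 * ((1 + v_wheel / c) / (1 - v_wheel / c) - 1.0)

# ~0.9 MHz and ~19 MHz: tiny compared to f1, but easy to digitize.
assert 0.8e6 < beat_mirror < 1.1e6
assert 1.7e7 < beat_wheel < 2.0e7
```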
So those are three different ways that you can get out different frequencies that you can then use for these heterodyne techniques, but all using just one source here. So there would be a beam splitter here that takes off some of the omega 1-- some goes through the plasma, and the other part of the omega 1 reflects off the wheel, and then that gets interfered and becomes this one. Yeah, any questions on any of these? Yes? AUDIENCE: The heterodyne system, do you determine the path length just by tuning where you're putting your beam and beam pointer, or is there some systematic way of-- JACK HARE: Oh, for a place where you want to get within the coherence length? AUDIENCE: Right. JACK HARE: Yeah, you'll have to tune it. I mean, what I would do is I'd measure it by hand very closely first and then I would move an optic back and forth until I got a good signal on my system. Or you can do a lot of this in fiber. Depending on if you want to use telecom wavelengths, 1,550 nanometers, you can use fiber optics and stuff like that, and then you know your fibers are the right length and so that's very convenient. So if you're doing CW, the coherence length on CW beams, continuous wave beams, is extremely long. It could be kilometers on a telecom fiber. So you only have to get it right within a kilometer, which is usually easy in a university lab. AUDIENCE: Yeah. OK. JACK HARE: Other questions? All right. Let's do some spatial heterodyning. I had some pictures to show but we haven't got far enough yet. So I'll have to show the pictures next time. There are some nice ones in Hutchinson. So now we're not dealing with signals that vary in time. We're interested in signals at a single time, but now making an image of the plasma. And I warned you that I've been drawing lots of wave diagrams, and would eventually come back to drawing wavefront diagrams. And here I'm going to make good on my threat. So here is our probe beam coming in.
Now in reality, this is going to be a two-dimensional system. So if I look dead on to the plasma, I will see the plasma here, and I'll see some probing radiation source, which ideally would be bigger than the plasma for reasons we'll discuss later on. But I'm just drawing it from the side and I'm drawing it in 1D. But you can imagine I could draw these as square phase fronts coming in. We're not going to do that; I just wanted to give you an idea. We can do all of this just thinking about it as a 1D system. And we've split some of this light and we have sent it like this, around the plasma. So this is coming in k2. This one is going k1. The k's here are vectors, and these k's refer to the direction the radiation is traveling in. They refer to the normals to the phase fronts. And so this is-- yeah, this is a vector system in this case here. And then this is our beam splitter. And just to test you, I've drawn it wrong. And of course, it needs to reflect off like this. Maybe I should put these reference beams in a different color. Might make my life easier later on. In the absence of any plasma, the beam is just going to go through and the phase fronts will remain completely flat. And then on our detector, which is somewhere back here, you'll get a nice overlap between the perfectly flat phase front that has gone through where the plasma could have been, and a perfectly flat phase front that hasn't gone through any plasma because it's the reference one here. So this is the reference. So on my detector, I just have the same phase. Maybe they're in phase. We get constructive interference. And so the image, if I expand this detector out like this, it would just be one uniform color. One uniform shade of gray. And that uniform shade of gray would be 2, in the sense that it's twice as bright as what you would expect if you didn't know anything about these things. Now let's put the plasma in and let's see how the plasma distorts the light.
Remember that if there's any density inside here, our phase velocity is actually faster. Our refractive index is less than 1: N is equal to 1 minus Ne over 2Nc. So when this phase front encounters the plasma, it will start to go faster. And in fact, it will start to distort outwards like this. And then when it exits the plasma, it will still be going faster like that. Now when it reaches this detector, the reference beam is still going to be the same, but our probe beam is going to have a delta phi which is larger. And that delta phi is going to be different in different places. There's going to be a delta phi here, which is relatively small, and a delta phi here, which is relatively large. And so what that might look like in our detector for a homodyne system, and this is still homodyne at this point, is we might have fringes like this, regions of lightness which are just concentric like that. And the trouble with these fringes is exactly the same problem we had with the homodyne version of the temporal interferometer. If you're trying to take a line out across here and you have some signal like this in intensity, and now instead of time, this is some spatial coordinate, you can't tell me whether the density goes like that or whether it goes like this. Because you can't tell me each time I get to a constructive interference fringe, whether that's because the phase has gone up by 2 pi or back down by 2 pi, or even worse, has stayed the same. So this is no plasma, this is the homodyne system, and now finally, we can consider the heterodyne system. So in the heterodyne system, we tilt these fringes. And so we have a slightly offset k2. That means that we have tilted fringes coming into our detector. So in the absence of any plasma-- this is now the third case, still with no plasma-- we will still have interference fringes because we've tilted one phase front with respect to the other. And we'll just have straight interference fringes like this.
But when we put the plasma in, these fringes will now be bent. So maybe they'll still be straight at the edges, at the outsides of the beam where there is no plasma. But in the middle of the beam, like this. And can you see once again that the fringes are getting further spaced apart and bunched up together again? And this corresponds to regions where there is a high density gradient and regions where there is a low density gradient here. And so this looks like, if I take a line out in this direction here, I would end up with a signal that looks an awful lot like the one-- if I've still got it-- that signal there that's hiding underneath. So it might look like something that has big fringes and then very rapid fringes like that. So this would be intensity. And again, this would be some coordinate like y, not time. But you can see there's a very deep mathematical link between the two of them. If I take a slice through any of my spatially heterodyne interferograms, I'll get something that looks like a temporally heterodyne interferogram. I am running over. I will take some questions, but we will pick this up next lecture, do a review of all of this. Hopefully, it will start to make sense in [INAUDIBLE].. But yeah, questions. Anyone online? OK. We will pause there. You can spend the weekend thinking about heterodyne interferometry. Come back fresh on Tuesday and it'll all make sense, I promise. OK, thank you very much. See you guys later.
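The bent-fringe picture from the end of this lecture can be reproduced numerically. This sketch is my own construction (the wavelength, density profile, path length, and tilt are all invented numbers): a plasma column adds a phase delta phi(y) through the refractive index n = 1 - ne/(2 nc) quoted above, and the tilted reference beam turns that into a carrier-plus-bend interferogram.

```python
import numpy as np

eps0, me, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
lam = 532e-9                          # assumed probe wavelength [m]
omega = 2 * np.pi * c / lam
n_c = eps0 * me * omega**2 / e**2     # critical density [m^-3]

y = np.linspace(-5e-3, 5e-3, 4000)    # coordinate across the beam [m]
n_e = 1e24 * np.exp(-(y / 1e-3)**2)   # assumed Gaussian plasma column [m^-3]
L = 1e-3                              # assumed path length through plasma [m]
# Phase accumulated along each chord for a uniform slab of length L:
dphi = (omega / c) * n_e / (2 * n_c) * L

k_y = 2 * np.pi * 2000.0              # carrier from the reference-beam tilt [1/m]
interferogram = 1.0 + np.cos(k_y * y + dphi)   # straight fringes, bent by dphi

assert 1.4 < dphi.max() < 1.6   # ~1.5 rad peak shift for these numbers
```

A line-out of `interferogram` shows exactly the bunching and spreading of fringes described above where the gradient of delta phi adds to or opposes the carrier.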
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_2_Magnetics_I.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Welcome to our second lecture. Today, we are going to be talking about magnetic probes. These are surprisingly simple devices. But you can also measure a surprising amount about your plasma from these. So we'll start by considering just a simple loop of wire. So let's imagine we have a loop of wire like this. It's got some area A. And it's got some magnetic field pointing out of the page, B, like this. And we can imagine that this is a time-varying magnetic field, B of time here, OK? And then we go and grab one of Maxwell's equations. We'll take the curl of E is equal to minus the time derivative of B here. So I'm writing that as B-dot. We can take this equation and integrate over this area here, integrating up the curl of E over this surface, dotting it with dS, and doing the same thing on the other side, integrating up the magnetic field dotted with dS, like that. And then we can use one of our nice vector calculus identities and convert this from being a surface integral over the area into a line integral around the curve which bounds this surface here. And that will allow us to write this instead as the integral of E dot dL. And we recognize that if we're integrating an electric field around a path, this is just going to end up being a voltage. So it'll be a voltage difference between these two points-- this point here and this point here. We're not going to measure it right here. We'll attach these with cables to some digitizer or oscilloscope out here. So there'll be a potential difference around this loop. We'll call that potential difference V. And that's simply going to be equal to the integral of B-dot dotted with dS here. We've already said that the area of this surface is simply A. So we'll just write that as A. And then we'll write this as B-dot here. And if you want to increase the voltage you get for a given time-changing magnetic field, you can put more loops on this.
So your probe could have one loop, or it could have two loops, or it could have n loops, like this. And we would simply multiply this by n at the end there, where n is just the number of loops we have. So that's how you make this more sensitive. And so because of this dependency on B-dot here, these are called B-dots, very inventive. OK. So this is a very simple diagnostic. It senses magnetic fields. Now, I've made a few approximations or assumptions while I've been deriving this. Can someone give me a reasonable condition on the magnetic field, particularly thinking about its spatial variation, that enables me to derive this simple equation? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, OK. And so we might have some length scale over which this magnetic field varies. So we call this LB. And we could write this as B over grad B. And you'll excuse my sloppy notation here. I've sort of dropped the fact that these are vectors here. And we have some length scale associated with the size of the probe. So we can call this LP. And that's going to go like square root of A. And so we want to make sure the magnetic field doesn't change much over this length scale here. So we might demand that LP over LB is much, much less than 1 here. So effectively, your magnetic field is only changing in time. It's not changing spatially. Of course, if it is changing spatially you will still measure something. You'll still get a voltage out. It's just that your interpretation won't be quite as straightforward as this. If you know something about how the magnetic field varies spatially over your loop, then that's OK. But if you don't, it's going to make your interpretation very, very tricky. OK, any questions on this? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: OK. So these are two characteristic length scales that we're comparing here. LP is a characteristic length scale of our probe. It's just going to go as a square root of the area.
And LB is the characteristic length scale over which the magnetic field varies. If you look at this dimensionally, you see that it's going to have dimensions of length. And so we're just saying that our magnetic field length scale has to be much larger than our probe length scale such that the magnetic field is basically constant across the probe. Anyone else? Anyone from Columbia? OK, good stuff. So of course, now, you've got your voltage out. And you've digitized it with your oscilloscope. Or at least if you haven't digitized it, it's coming out these little cables here. And of course, you need to do something with it to get B-dot. You need to integrate it, right? So you have got an equation V equals A B-dot n. And you want an equation that looks like B-dot equals V over An. And so therefore, B is going to be the integral with respect to time of V over An, like this, from time 0 up to whatever time T you're at at that point. OK. And there are a few ways to do this. Back in the day, you might want to try and do this automatically because doing this numerically may be computationally expensive. You can do this passively. You could have some sort of RC type filter. Those of you who know your analog electronics would recognize something like this, where we've got our B-dot loop at the end there. And we've got some resistance and some capacitance. And this forms a low pass filter which, depending on the frequency of your signal, will passively integrate it up. So that's very simple to build. If you're trying to put 1,000 B-dot probes around your tokamak, you might think that this is a nice solution. If you want something a little bit more fancy, you can do this with an op amp. So we can put an op amp in here. And we can put in something where we have a resistor and a capacitor for each of the inputs. And this is nicer because, of course, it's going to buffer your signal. As well, it makes it easier to do the impedance matching.
So you might want to consider an active system like this. So just to be clear, this is passive. It has no electrical components that require power. And this is an active system. And of course, the third way you can do this, especially these days, is simply take this signal and do it digitally. So we can just sum up V over An times delta T, where delta T is whatever your time base is. Now, you have to be a little bit careful here because there are other things which can induce a voltage. Oh, look, it's still there. There are other things which can induce a voltage on your probe other than just the magnetic field. But you only measure the voltage at the end. So you don't really know if something else has come and interfered. But there are some tricks to try and get around that. So let's have a chat about some other voltages that might crop up on your probe that you're not aware of. You could, for example, have some stray capacitively coupled voltage on here. So you may end up having a system where you have a loop which has a voltage on it, a B-dot, as you expect, but there is also some other capacitively coupled signal, like noise. So your signal would just look like this. But on top of it, you've got some other noisy signal that corrupts it. And as you start integrating this up, it's going to make a bit of a mess of your integration. If this is truly just noise, if it's randomly distributed, the integration is obviously going to get rid of that. It's very nice for smoothing things out. But if this has some structure-- for example, it's related to potential inside your plasma-- then this is going to screw up your measurements. So the way we get around this is we perform a differential measurement. We have two B-dots. One of the B-dots is wound in one direction, like this. And the other B-dot, which we place as close as possible, is wound in the other direction, like this. So the sense in which we evaluate this dot dL integral is in opposite directions. 
So for probe 1 and probe 2, we get different voltages out. For probe 1, we're going to get a voltage that looks like V1 equals A B-dot plus this capacitively coupled voltage. But for 2, the sense in which the L goes is opposite. So we get V2 equals minus A B-dot plus some capacitively coupled voltage. And so we can do a trick that we like doing in experimental physics, which is to make a differential measurement. And we can take the difference of these. We can take the difference of V1 minus V2. And you'll see that these capacitively coupled voltages, which are the same on both loops, will cancel out. But we will amplify up the signal that we actually want. And so we'll end up with 2 A B-dot, like this. So this is, as I said, a differential measurement. And whenever possible, if you're trying to get rid of noise, you want to try and do things like differential measurements. So if you're asking where this capacitively coupled signal comes from, if you just imagine having your B-dot probe like this and there's some plasma floating over here, this plasma may be at some potential, V plasma, like this. And your probe may be attached near ground. So it's floating at close to V equals 0. But you can see effectively there's going to be some capacitive coupling. If I drew a circuit diagram, it'll be as if there was some capacitor between these two. And that would be enough to induce some small fraction VC on here, which is going to be much, much less than the plasma potential. But the plasma potential could be a kilovolt or so if you're dealing with electron volt temperature plasmas. And so this VC can still be large enough that it will throw off your measurements. This is why it's worth doing differential measurements if you can. Any questions on that? Yes? STUDENT: What exactly [INAUDIBLE]?? JACK HARE: Yeah, that's a good point. Yes, it is. STUDENT: That ever [INAUDIBLE].
JACK HARE: Yeah, we'll talk about that more in a minute because if you've got very slowly changing magnetic fields where this is an issue, like on a superconducting tokamak, B-dots aren't really going to do any good because your signal is going to be absolutely tiny because B-dot will be very, very small. So your voltage will be very, very small. So you probably want to rely on other sensors to do that instead. Right, this really should read like delta B. This is the change in magnetic field. So you can't measure a DC magnetic field with these. Yeah, and there are lots of problems when you do this practically with drift. So if your oscilloscope isn't perfectly calibrated and it's got some slight offset to it, that's going to be integrated up as a gently ramping up magnetic field, which is another good reason to do differential measurements because you can get rid of those offsets as well. Any other questions? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Oh, well done. Yes, that's a good point. I should have mentioned it. I've assumed here that the magnetic field is perpendicular when I drew this diagram. It's parallel to the normal of this surface here. So that's where we have B-dot dS. Of course, the magnetic field could be at some arbitrary angle to it. We're only going to pick up the component which is normal to the surface. So this is insensitive to magnetic fields, for example, pointing in this direction. This is actually quite important in the tokamak where-- or in other devices, where you may not actually know what the magnetic field is. We know what the vacuum magnetic field is. But once there's current flowing in the plasma, we may not know which component we're picking up. So you may also want to have not, for example, just two loops for this measurement, but three-axis B-dot probes as well. And so on laboratory astrophysics devices like LAPD and MRX, they have these arrays of probes with three different axes, so they can measure the full vector magnetic field.
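The practical recipe from the last few paragraphs can be sketched end to end: counter-wound loops give V1 = nA B-dot + Vc and V2 = -nA B-dot + Vc, the difference cancels the common-mode pickup, and a digital sum of V/(2nA) times delta t recovers B. All numbers below are synthetic, my own choices for illustration.

```python
import numpy as np

dt = 1e-9                                # 1 ns digitizer time base
t = np.arange(0, 1e-6, dt)
A, n = 1e-6, 10                          # 1 mm^2 loop area, 10 turns (assumed)
B = 0.5 * np.sin(np.pi * t / 1e-6) ** 2  # field ramps up to 0.5 T and back down
Bdot = np.gradient(B, dt)
Vc = 50.0 * np.sin(2 * np.pi * 3e6 * t)  # common-mode capacitive pickup

V1 = n * A * Bdot + Vc                   # loop wound one way
V2 = -n * A * Bdot + Vc                  # counter-wound loop
Bdot_meas = (V1 - V2) / (2 * n * A)      # differential: Vc cancels exactly
B_meas = np.cumsum(Bdot_meas) * dt       # digital integration

# Despite 50 V of pickup on each loop, B is recovered to better than 1%.
assert np.max(np.abs(B_meas - B)) < 2e-3
```

Note that the same `cumsum` would happily integrate up any residual offset between the two channels, which is the drift problem mentioned above.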
Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so the idea here is this could be due to anything. This is due to some high voltage being present somewhere in your system. So it could be a capacitor bank discharging. Or it could be a kiloelectron volt plasma. But the fact is that any conductor and any other conductor have a capacitance between them, just always. There is an ability to induce some potential on one conductor from the other. And so all I'm saying is if you have some stray potential, it will couple with the same sign to these two probes. But because they're oppositely wound, we'll be able to cancel out that same sign potential there. That's the other reason why you want the probes as close together as possible, because if you put them further apart, this VC may be different on the two probes. And then they won't cancel perfectly. So ideally, you'd have two loops perfectly counter-wound lying directly on top of each other, yeah. Any questions from Columbia? I see two hands. Maybe Nigel first? STUDENT: Hi, could you remember to repeat questions when they're asked in person? JACK HARE: Yeah, thank you. I forgot about that. Yeah, was that Matthew's question as well? Or is it a different question? STUDENT: No, I have a separate question. Nigel, did you have anything else? STUDENT: No. STUDENT: OK, so do you have to worry about inductive coupling between two oppositely wound up probes that are very closely spaced? JACK HARE: The brief answer is you'd have to worry about that if the current flowing through the probes was sufficient to create a magnetic field that was comparable to the magnetic field you're trying to measure. So in general, it's not. Yeah, it's a good question. But say you've got a volt or so that you're measuring. And you're measuring it over a 50 ohm terminator, which is what we normally do for the matching with 50 ohm cables. That's going to be a pretty small current. So the magnetic field you're inducing is pretty small.
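This answer can be checked with rough numbers of my own: the field the probe generates with its own measurement current, estimated here as the field at the center of a single circular loop, is orders of magnitude below what the probe is measuring.

```python
import math

mu0 = 4e-7 * math.pi
V = 1.0                      # ~1 V signal across the termination
R = 50.0                     # 50 ohm terminator
I = V / R                    # 20 mA circulating in the loop
r = 1e-3                     # assumed ~1 mm loop radius
B_self = mu0 * I / (2 * r)   # self-field at the center of one loop

# For a 10-turn, 1 mm^2 probe, 1 V corresponds to Bdot = V/(nA) ~ 1e5 T/s,
# i.e. ~0.1 T over a microsecond, versus ~1e-5 T of self-field here.
assert B_self < 1e-4
```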
It's a good question. STUDENT: OK. All right, thanks. JACK HARE: Yeah, another question? STUDENT: [INAUDIBLE]? JACK HARE: Right, if you integrate a zero mean Gaussian noise, then you're right. You will get rid of VC. But it doesn't have to be a zero mean Gaussian noise. This VC, in fact, will probably represent something like-- say you're doing a discharge in a plasma. The voltage will rise and fall in a similar way to the current in a magnetic field. So you might actually have a system where, let's say, this is your signal B-dot, like this. And then your signal is doing something like-- let me draw it-- like this. So this is a capacitor bank ramping up and discharging again. And so you can see, if you average that out, you're not going to get rid of it. So this is for non-random noise, yeah. STUDENT: I felt like [INAUDIBLE] JACK HARE: Oh, I guess you're right, yeah. It's going to be relatively small depending on how big your signal to noise ratio is. But you're right, it's like a random walk. So you're going to diffuse away from where you start, yeah. Yeah, I guess, as with all measurements, you want to have a good signal to noise to avoid that. STUDENT: [INAUDIBLE]? JACK HARE: It depends. You should design, as a diagnostic designer, a system which gives you sufficient signal to noise, which might mean increasing the area of your loop. Or it might mean increasing the number of turns in order to fulfill that. OK, I'm going to move on. So just to say, you can take these two signals here and you could feed them in automatically to an op amp similar to this. And you can make an op amp circuit that does subtraction. And then you could feed that into an op amp circuit which does integration. The trouble is, unless you inspect these two signals visually, you may find that one of them is more affected by the capacitive coupling than the other. This is a problem we have in my research, where one of your signals might look really nice, like this.
And the other signal just goes like that. And if you feed these automatically into an op amp without digitizing them and just accept whatever comes out the other end, then you're not going to get a decent answer from this. This data is junk. You want your signals to look as close to opposite as possible. So probably you want to digitize them separately. Anyway, we're going to be focusing at the moment on B-dots which are placed outside of the plasma. I will talk about B-dots placed inside of the plasma because that's something that I like to do in my research. But for most of the time, if we're dealing with things like tokamaks, magnetic confinement fusion devices, you don't want to stick things inside there. They're just going to melt. So we'll be sticking with external probes. OK. There was a question earlier about measuring when B-dot is small, when we've got almost steady state magnetic fields. So let's have a quick chat about that. So when we have B-dot goes to 0, we'll call these steady states. Of course, they don't have to be absolutely not time changing. They just have to be time changing slowly enough that we can't measure the voltage easily. So this would give V on our B-dot also going to 0. So at this point, you probably want to resort to a different device. And these are often called Hall effect sensors. And another name for a Hall effect sensor is a Gauss meter. So the way these work is we have a block of a special semiconductor. So this is a terrible drawing of a cuboid. There we go. There's a magnetic field that's threading through this that we want to sense. We allow a current to flow through it in this direction, from this face to this face back here. And because we have charged particles moving in a magnetic field, there's a J cross B force which separates the charges. And so on one side, we build up a positive charge. And on the other side, we build up a negative charge. 
And if we attach our oscilloscope here, we can measure that potential difference. So there's going to be an electric field in this direction. And by knowing the properties of our semiconductor and measuring this electric field, we can infer what the magnetic field is here. Again, there's a J cross B force that separates the charges in these two directions. So this works pretty well. It doesn't work very fast. So you can't use these to replace B-dots. If you want to have a very high frequency signal, this doesn't work. And the other problem is that the plasma is a very harsh environment, especially if you're going to a nuclear environment, like the next generation of tokamaks. And that causes degradation of this semiconductor. And then you have to work out how you're going to calibrate these things in situ. So you can't just pop them out because they're hot, radioactively hot. So these sorts of devices are very good. But we really need to have rad-hardened versions of them. And as far as I know, from at least the last time I looked, I don't think that we have a solution to this problem at the moment. But I may be wrong. Another technique you might want to use is Faraday rotation or the Faraday effect. We'll actually talk about this a bit more later on in the context of Faraday rotation inside plasmas. But in fact, the Faraday effect was first discovered by Faraday, who didn't know anything about plasmas. And he was using, well, actually, all sorts of things. If you go back to Faraday's notebook, it's kind of remarkable. He built one of the first large magnets. He also had an ability to detect the polarization of light. And he did what any reasonable experimental physicist would do. He started putting things inside the magnet and seeing if they rotated the polarization of light. And so he's got in his notebook milk. Milk does. Beef. Beef doesn't, because the light didn't go through. But fair on him for trying. It's great. So yeah, there's all sorts of cool things.
Now, these days, we don't use beef inside most experiments. What we use is a special type of glass which has-- it's usually called Verdet glass because it's got something called a Verdet constant. This is just probably some French guy's name. So if you've got some magnetic fields in this direction and you pass some linearly polarized light-- so we'll start off with a polarization, for example, in this up-down direction. When the light comes out, that polarization is going to be rotated. And the angle that it's rotated by, beta, is equal to V, which is the Verdet constant-- don't get this confused with voltages. This is just a constant that is a property of this glass-- and then times by the magnetic field times by the length of this lump of glass here. OK. And so this means that if you put a little chunk of this glass inside a fiber optic, this is extremely convenient because you can just put that fiber optic somewhere. You don't have to worry about mirrors for this light because it's all trapped inside the fiber. And we have lots of really great technology and fibers for looking at polarization because the telecommunications industry uses it for multiplexing. And so you can sense this polarization. And that means that you can then infer what the magnetic field is because you know V. You know L. You measure beta. And therefore, you get B. Once again, this is B in the direction of propagation. So same with the B-dots, it's got that limitation. You don't get a vector out. And the other problem is that, again, is rad hardening. And actually not just rads, as in like neutrons and things like that, though that will definitely alter the structure here and therefore change the Verdet constant, but even things like X-rays can cause blanking inside this glass. So people have tried to use this on the Z-machine at Sandia, which is the world's most energetic X-ray source. And they don't work very well because the X-rays get inside the fibers. And they blank things out.
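The inversion just described, beta = V B L solved for B, is a one-liner. Here's a minimal sketch; the Verdet constant and glass length are invented illustrative numbers, not values from the lecture.

```python
import math

# Sketch of the Faraday-rotation inversion: beta = V * B * L, so
# B = beta / (V * L). The Verdet constant and glass length below
# are invented illustrative numbers, not values from the lecture.
def faraday_field(beta_rad, verdet, length):
    """Infer the field along the propagation direction [T]."""
    return beta_rad / (verdet * length)

beta = math.radians(5.0)  # measured polarization rotation [rad]
verdet = 70.0             # Verdet constant [rad / (T m)] (assumed)
length = 0.01             # length of the glass chunk [m]
B_inferred = faraday_field(beta, verdet, length)
print(f"B = {B_inferred:.3f} T")  # about 0.12 T for these numbers
```

Note this only gives the component of B along the propagation direction, so, like a B-dot, it does not return a vector.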
So this is another possibility. But once again, it's something we need a lot more technology development on before we can use it in the harsh environments we expect in fusion devices. So any questions on the Hall effect, Gauss meters, or Faraday effect? Yes? And I'll try to remember to repeat your question. STUDENT: [INAUDIBLE]? JACK HARE: OK. So the question was, why can't the Hall effect be used for fast changes? Simply because there is some drift velocity associated with these carriers. And we can't make them drift fast enough. So if you have a very rapidly changing B, they'll all be sloshing to the right. And then by the time they've got there, they'll be sloshing back to the left. And they'll never make it back. So I don't know off the top of my head what the time scale is. But it may be in Hutchinson's book. Or you may be able to look it up. But my impression is that this is more for milliseconds, second kind of time scales, where B-dots can do much faster. This can be very fast. Faraday effect sensing can be as fast as your digitizer. So one can get a 50 GHz digitizer these days, if you have a lot of money. So you can sense very, very fast. And yeah? STUDENT: [INAUDIBLE]? JACK HARE: No, I don't know what SPARC is using. So there's a good question. They presumably have a plan. I haven't seen like a diagnostics paper for SPARC yet. I keep nagging SPARC people to do something. But I haven't seen it yet. So it'd be cool to see it. Other questions from Columbia? STUDENT: I have one. For the Faraday effect, how do you account for the fact that the angle is only known modulo 2 pi? Do you measure the angle at multiple stages? JACK HARE: Oh man, we're going to be talking a lot about ways to disambiguate modulo 2 pi stuff when we get into polarimetry. So I'll just say any technique you can use there, you can do here. So there's some really cool stuff you can do. Yeah, we can do like temporally heterodyned polarization measurements, things like that.
So yeah, it's doable. This simple method here is flawed, like you pointed out. But there are other things we can do, which can get around that ambiguity. Yeah? STUDENT: Do perpendicular components of the magnetic field affect the Faraday effect sensor? JACK HARE: The question is, do perpendicular components of the magnetic field affect the Faraday effect sensor? I don't think so. There is an effect in plasmas. It's called the Cotton-Mouton effect, which can have an effect. But I don't know what happens in Verdet glass. It's kind of like a solid state kind of thing, which I'm less familiar with. So I've never heard people talking about that as being a problem. Good question, though, yeah. Another question here? STUDENT: Does the Verdet constant depend on the wavelength of light? JACK HARE: Yes, the Verdet constant depends on the wavelength of light, maybe not very strongly. And you would know what wavelength you were using. So it wouldn't be a big problem. So for telecoms, we use 1,550 nanometers because that's what's being developed. So if you're going to do a fiber diagnostic, it's probably going to be at 1,550, unless you want to spend a lot of money on custom [INAUDIBLE]. All right, let's keep moving. OK. And now, a related diagnostic, where should we go? So a related diagnostic, another firm favorite of mine is something called a Rogowski coil, a Rogowski coil, depending on what you think [INAUDIBLE]. So a Rogowski coil measures the enclosed current. Enclosed by what, you ask. Enclosed by the coil. So I will draw one in a second. And you'll see what I'm talking about. So a Rogowski coil looks like-- let's see what the best way to draw this is. We have, a little bit like a B-dot, a cable coming up. Instead of just going around in a loop like this, we have something that spirals around in a helix. And then for reasons I won't go into right now but you may feel free to ponder, we tend to not take the cable down here.
But instead, we wind it back all the way around inside, like this. The answer to this is in Hutchinson's book. But I'm not going to go in right now. And the current that's enclosed-- for example, we can consider that we have some sort of current carrying rod, like this, that goes through this Rogowski coil. So we've set our Rogowski coil around some sort of conductor. Obviously, that current has to close. And it will close outside the conductor, like that. OK, so let's have a look at the geometry here, just a simpler diagram. We've got current. We've got some sort of surface like this. And we've got all these little loops. And these little loops each have an area. There's a current here. So there's going to be a magnetic field through it, like that. There's going to be two dL's. There's going to be the dL around this little loop. But there's also going to be the dL prime around this loop, like that. So we're actually going to end up with a double integral. We're going to end up with having a flux that is going through this loop. This magnetic flux is going to be equal to the number of turns per unit length because we're going to have multiple of these little surfaces all the way around like this. We're going to be integrating along L prime, which is the circumference of this Rogowski. And we're going to be integrating over A, which is the area of one of these little loops here. We're going to be integrating B dot dL-- actually, that's not quite right-- B dot dS over each little loop, and then dL prime, like that. Now, the key insight here is that this flux is going to be due entirely to the current that's going inside here. We know from Ampere's law that the integral of the magnetic field around some path, dL, which is this one here, using dL prime, is going to simply be mu I enclosed.
So this magnetic field B here, at least on average around this whole circle, is going to be proportional just to mu, which is going to be the permeability of whatever material we're using, times the enclosed current. And the voltage that we get out, down here, which is the thing we're going for, is simply going to be the time rate of change of the total flux through this circuit. And so we're going to end up with something that looks like nA mu I enclosed dotted. So once again, we've got a voltage, which is proportional to the time derivative of something. It's also proportional to an area. Instead of having a total number of turns n, we now have a number of turns per unit length. So this is turns per unit length. This is not the area of the whole thing. It's the area of one of these little turns here. So this could be, for example, pi A squared if the radius of one of these little loops is A, like that. And the only reason I wrote this as mu as opposed to mu 0 is because it's really mu 0 mu R. And you want to be a little bit careful here because, for some materials, if you get very strong magnetic fields, they have saturation of mu R, things like steels and stuff like that. You want to avoid those because this mu R is going to start changing depending on the strength of your signal. And so you're going to have not a nice linear relationship between voltage and current, but a non-linear relationship. So the only reason I put this in here is just to say that you shouldn't use steel for these things if you're going up to high magnetic fields. The nice thing about this is it doesn't matter where the current is inside here. Remember, Ampere's law just talks about enclosed currents. I could split this up and have two conductors, like this. I could just have one conductor over all the way on one side. I would still get the same signal out of this. That's nice because it means I don't really need to know the exact location of all the conductors inside here.
That means that I don't have to position this thing quite so precisely, which is extremely useful. So this I enclosed here means that the measurement is independent of the conductor position, or should we say the current channel position. OK. And you can go and wind one of your own of these. You can strip back a BNC cable and get some magnet wire and wrap it all the way back around, which is what Thomas Varnish has been doing for [INAUDIBLE] experiments. Or you can go online and you can get something from Pearson. So everyone buys Pearson's. These nice, green loops. And they have a passive integrator built into them, like we discussed before. And they're labeled in volts per amp. So if you just want something off the shelf and you've got lots of money, you can buy something like that. So these are absolute workhorses not just for plasma physics. These are just if you want to measure current going through something. So any questions on these before we keep moving and show how to use them in plasma systems? Mm-hmm? STUDENT: [INAUDIBLE]? JACK HARE: Yeah. Once again, yes, you're right. It only measures the current normal to this big surface here. So I guess I glossed over that. Yeah, you're quite right. STUDENT: [INAUDIBLE]? JACK HARE: It's going to happen somewhere in Ampere's law. And I think when we talk about I enclosed here, we are-- huh, interesting. Maybe it doesn't matter because that doesn't look like we care particularly about the angle of the conductor here. I don't know the answer off the top of my head. If someone else does know the answer, shout out. Otherwise, I'll have a think about it and get back to you. STUDENT: [INAUDIBLE]. JACK HARE: OK, so I've skipped it in that line. Yeah, I think you're right. So OK, I think it does matter what the orientation of your conductor is. Yeah, cool. Thank you. Other questions? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: No, you can't use this for constant current still because the signal V is proportional to I dot.
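Since the raw Rogowski voltage goes as I-dot, recovering the enclosed current means integrating, either with a hardware integrator (like the one built into the Pearson probes) or numerically after digitizing. A sketch of the numerical route, with illustrative coil parameters and a synthetic linear current ramp standing in for real data:

```python
import math

# Sketch: recovering I_enclosed(t) from a raw (unintegrated) Rogowski
# signal, V = n * A * mu0 * dI/dt, by numerical integration.
# Coil parameters and the synthetic current ramp are illustrative.
MU0 = 4.0e-7 * math.pi
n_per_m = 1000.0            # turns per unit length [1/m]
area = math.pi * (2e-3)**2  # area of one small loop (2 mm radius) [m^2]
dt = 1.0e-8                 # digitizer sample spacing [s]

# Fake "measured" voltage from a known linear ramp I(t) = 1e5 * t:
Idot = 1.0e5                # [A/s]
v_samples = [n_per_m * area * MU0 * Idot] * 100

# Trapezoidal integration, then divide out the coil constant n*A*mu0:
current = [0.0]
for k in range(1, len(v_samples)):
    dflux = 0.5 * (v_samples[k] + v_samples[k - 1]) * dt
    current.append(current[-1] + dflux / (n_per_m * area * MU0))

print(current[-1])  # ~0.099 A after 99 steps of 10 ns at 1e5 A/s
```

In practice you would also subtract any DC offset on the digitizer before integrating, since an offset integrates up into a spurious linear drift in the inferred current.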
So you'd have to use another current sensor if you want to do that. And these suffer from similar problems to your B-dots, where you can pick up stray voltages on them. So you might want to use an oppositely wound pair of these side by side and do differential measurements. There's also a continuous version of this, where you replace this helix with just a groove in a plate of metal. And I'll leave it to you to think about how that works because it's actually a little bit mind-bending. But it does still work. What I want to do now is talk about how to use these in the context of plasma physics. So any other questions before we move on? Columbia? All right, looks like we're good. Okey doke, let's have a think about something like a tokamak. I'm going to draw the cross-section of a tokamak like this. It's going to have a plasma inside it. And in that plasma, there's going to be some current. This is going to be the toroidal current, I phi, which loops around the torus. This toroidal current is being driven inductively by a transformer. So this is an iron core. And we have a primary winding. And the tokamak plasma forms a secondary winding. So we're driving some current using this ohmic transformer. We can drive a constant current with this. That will require us to keep ramping up the voltage on this transformer. But that's allowed. And what we'll do is we'll put-- around one part of our plasma, will put our Rogowski coil. So here comes Rogowski coil, loop de loop de loop, back wound. And we measure V goes as I phi dot. OK, so we can measure that current. I also want to draw this same system now looking from above because it makes it easier for what the next point is. Say so, this is now our torus. We've got our plasma, like this. Hopefully, it's not that unstable. And we've got some toroidal current, I phi, going around it. We've got some transformer, as already discussed, like that. We've put in our Rogowski coil, like that. 
And we're also going to put in another loop. And that loop is going to be our voltage loop. So this loop simply looks like this. And it measures V in the toroidal direction. The reason it does this is because this transformer is inducing some toroidal potential. It has to. That's what's driving this toroidal current. This is equal to V toroidal over the plasma resistance, whatever that means really in this context. Now, ideally, we would measure this by sticking something inside the plasma. But actually, we don't need to because it's not the plasma that's inducing V toroidal. It's this whacking great transformer. So we can put our loop, say, above the plasma and just measure it here. It doesn't matter where the break is. It just matters that when we measure the difference between these two, as you traverse a circuit around here, you're going to pick up some V toroidal, which is the same V toroidal, as I said, that's driving the plasma current here. It will become clear why we're trying to measure this in a moment. It's actually very much related to this plasma resistivity here. So this is an important quantity that you might want to measure. OK. So let's set this up and have a go at measuring the plasma resistivity using just two loops of wire, the Rogowski and V phi. OK, so we consider a plasma. It has a volume V. And it's got a surface-- what do I call it-- partial V like that. OK, and our loops are outside of here. So both the ROG and the V phi are outside this surface. OK, good. Now, let's think about energy balance in this system-- so the amount of energy in the system, how it's changing in time, and how we're injecting energy into the system. So we're going to have a few different terms here. We're going to have an ohmic heating term, E dot J. So that's going to be related to the current inside this system and the electric field, which is related to the voltage.
So we can say that this is going to be-- straight away, we can see, this is going to be something with I phi. And this is going to be something to do with V phi. There's also going to be a term related to the change in total magnetic energy inside the system. So the magnetic energy is B squared over 2 mu 0. And we want to have the time rate of change of that to make it a power. I'm going to write that slightly differently to make it clearer what's going on here. We want the time rate of change of the magnetic energy density, like this. And all of this is integrated over this volume, dV. And this is going to be equal to-- this change in internal energy of the system is going to be equal to the energy that we're injecting into the system. We're injecting the energy into the system through this transformer. And this is going to be equal to 1 upon mu 0 the integral through this surface partial V of E cross B integrated over the surface. So this is the Poynting flux from electromagnetism. This is ohmic heating. And this is effectively the inductive power, the energy we have to spend to change the magnetic field. And these are balanced because we're going to be discussing a system in steady state. So we're not going to allow the energy to change without us putting some power into it. So I'm not sure I need that assumption, but there we go. OK, questions on this setup before we try and work out some of these terms? Yeah? STUDENT: You said it's [INAUDIBLE] JACK HARE: Yeah. STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so imagine we're microwaving a lump of chicken. This left-hand side here is the temperature, the thermal energy density of the chicken. So we're integrating over the chicken, OK? The right-hand side is the Poynting flux, which is the microwaves penetrating the surface of the chicken, OK? And so we're setting those two equal to each other. So this is the internal energy of the plasma. And this is the power that we're putting into the plasma.
So this is not the internal energy. This is the change in internal energy as a function of time. That's an ohmic heating rate. That's a power. This is the time derivative of energy. So it's also a power. And this is a power integrated with [INAUDIBLE]. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: If we counterwound the V phi, we would measure nothing. STUDENT: [INAUDIBLE]? JACK HARE: Yes, you could have two oppositely wound loops in V phi. Yeah, sure. And yeah, any questions from Columbia? Yeah, Grant? STUDENT: [INAUDIBLE]? JACK HARE: We are not including radiation because we don't want to be here all day. Yes, OK, cool, but good question. Yes, there should be radiation. Where would radiation be? Left or right-hand side? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so if we included radiation, it would also be a Poynting flux. It'd be a Poynting flux going outwards instead. And we'd see that because we carefully examine E cross B. And we would see what was going out that way. It's in there. It's in there. I'm really not going to consider it, as you'll see in the next slide. Other questions, yes? STUDENT: [INAUDIBLE]? JACK HARE: Absolutely. So you can get current here from other effects. We're considering a very simple model of it. I mean, the bootstrap current you only get if there's some other current driven in the system. You have to have some other current, gradients of that current drive the bootstrap current. So you need to have something else. We could also have non-inductive current drive and all sorts of clever things. But none of those have really been demonstrated. So we're sticking with this. This also is a very simple model, which allows us to make some progress. Yeah? There was another hand up over there? No? Any other questions? We're going to try and make this slightly more comprehensible in a moment. I've already given you a hint about where some of these terms are going to come from. STUDENT: Hey, quick question. How did we measure the loop voltage?
The toroidal voltage? JACK HARE: We measured the toroidal voltage in a really dumb way, which is we simply just put a loop of wire around the tokamak above the plasma. And the same loop voltage, which is induced inside the plasma, will also be induced inside this loop of wire. STUDENT: OK. JACK HARE: And so we are just using the wire as a proxy bit of plasma because we can conveniently stick electrodes into it, which we can't do with our plasma itself. And so we just have this loop here. That's what measures loop voltage for us. Cool. OK. So we now have a look at taking apart some of these terms here. The Poynting term-- in reality, this looks like electromagnetic radiation going in. So we could try and work it all out with E and B properly. But we can also just use some circuit theory and see what's going on there. So a circuit theory would tell us that the power is equal to I times V. And so we're going to decompose that. It's going to be V phi I phi-- so these are both in the toroidal direction. And there could be a poloidal component of this, V theta, I theta. There's unlikely to be a radial component of this. And that's because it's pretty hard to arrange for currents to go radially outwards because you'd violate the divergence-free condition on the current, div J equals 0. So let's get rid of that. Now, it also turns out that, in something that's in steady state-- and so this could be a tokamak or a stellarator, or some other device like that-- this V theta would only come about if we had a change in the toroidal magnetic field in time. Because we're in steady state, our toroidal magnetic field is constant. We've set it with, say, our big superconducting magnets or something like that. That doesn't happen. So that's 0. And this whole term is 0. So as you might have predicted from the start, the energy that we're pushing in just from a circuit model is related to the voltage that we're dropping around the loop and the current that we're driving here.
So this is just treating this as some ohmic resistor, which is dissipating some energy. There's nothing particularly plasma physics inside here. This all makes sense so far? OK. So then we can write power is going to be equal to still E dot J, which is the ohmic power still. And then we're going to set that equal to V phi I phi. So this is the Poynting flux. I'll just label that ohmic as well. And then I'm going to take this inductive term to the other side. So I'm going to put it as a minus sign here. And I'm going to write it as a partial derivative not of the integral of the magnetic energy density over the volume, but in terms, again, of circuit quantity. So this is a 1/2 L I phi squared, like that. So I'm running out of space. So where shall I write this? So this L here is the inductance. And it's a circuit property that probably many of you are familiar with. We're going to define our inductance here as 1 over mu 0 I phi squared integral of B theta squared dV, so integration of the volume. And if you look at that closely, you realize all I've done is something incredibly tautological. So I've just defined L to include all of this stuff that we still don't know. But this now looks a lot more familiar because we've got L. That contains everything we don't know. But I phi is stuff that we do know because we're measuring it with a Rogowski. So don't worry about it. I haven't done anything. This is just sleight of hand at the moment. But we will resolve it. What we're going to resolve is try and work out what's going on with L here. Perhaps we can just get rid of it. OK. I just want to point out the exact value of L doesn't just depend on the current that's flowing inside here. It depends on the distribution of current. And so if I have, for example, a tokamak where the current density as a function of radius looks like this, that has a very different inductance from one where the current density looks like that.
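The claim that different J(r) profiles give different inductances can be checked numerically from the energy definition, L proportional to the integral of B squared over the volume divided by I squared. Here's a toy sketch for the internal inductance per unit length of a straight cylindrical current channel (a deliberate simplification of the torus), comparing a flat and a parabolic profile; the uniform case should recover the textbook mu0 over 8 pi.

```python
import math

# Sketch: the "energy inductance" L = (1/(mu0 I^2)) * integral(B^2 dV)
# depends on the current *distribution*, not its magnitude. As a toy
# check, compute the internal inductance per unit length of a straight
# cylindrical current channel (a simplification of the torus) for a
# flat J(r) and a peaked, parabolic J(r). All numbers illustrative.
MU0 = 4.0e-7 * math.pi
a = 0.1    # channel radius [m]
N = 20000  # radial grid points

def inductance_per_length(j_of_r):
    """L' = 2 W' / I^2, with B(r) = mu0 I_enc(r) / (2 pi r) (Ampere)."""
    dr = a / N
    radii = [(k + 0.5) * dr for k in range(N)]
    dI = [j_of_r(r) * 2.0 * math.pi * r * dr for r in radii]
    I_total = sum(dI)
    I_enc, energy = 0.0, 0.0
    for r, di in zip(radii, dI):
        I_enc += di
        B = MU0 * I_enc / (2.0 * math.pi * r)
        energy += B**2 / (2.0 * MU0) * 2.0 * math.pi * r * dr
    return 2.0 * energy / I_total**2

L_flat = inductance_per_length(lambda r: 1.0)
L_peak = inductance_per_length(lambda r: 1.0 - (r / a)**2)
# Uniform current recovers the textbook mu0/(8 pi); the parabolic
# profile comes out ~11/6 times larger for the same total current.
print(L_flat * 8.0 * math.pi / MU0, L_peak * 8.0 * math.pi / MU0)
```

Same total current, nearly a factor of two more stored magnetic energy per amp squared: that is exactly the sense in which L measures the geometry of the current distribution.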
And so that L is a measure, in some sense, of the geometry of your system. It actually doesn't care about the strength of the current. It just cares about the distribution of the current. OK. And the other thing I want to say here is that the definition of L we're using here is sometimes referred to as the energy inductance. There's another version, which is called the flux inductance. And that one is defined as L is equal to the magnetic flux divided by the current. These definitions are often similar. They often give similar results. But actually, there are some really subtle differences. And if you want to go plunge into Jackson or some other equivalent electromagnetism textbook, you can have a look at it. But the purposes of this, if you've ever seen this definition of inductance, it's not the one we're using here today. We're using this definition of inductance, which is twice the total magnetic energy over the current squared instead. OK. Questions on this before we move on? Oh, we're running out of time, I see. Okey doke. Working with this equation here. OK. So we want to ask ourselves, what is the time derivative of a 1/2 L I phi squared, like that. Well, we can chain rule this. We'll get L I phi I phi dot. We'll get plus 1/2 I phi squared L dot. OK. So there's actually two terms here. We can have a change in the magnetic energy because this, again, just still represents the stored magnetic energy. It can change because the amount of current flowing through the system changes. That kind of makes sense. We know if we change the current, we're going to change the magnetic field. So we'll change the magnetic energy density. But we'll also get a change if we change L. So I gave you this example earlier, that L is related to the current distribution. So say, for example, you have a sawtooth crash or something like that inside your tokamak that redistributes current. That current redistribution is going to drive a change in L.
And so that will change the magnetic energy. And so that magnetic energy will have to be paid for by, for example, the Poynting flux. So they'll have to be some balance here. But for our purposes, we're going to set this all equal to 0. We're going to set it equal to 0 because we're saying that this is 0 because we're in steady state. And slightly more weaker, we're going to just sort of say, yeah, this is roughly 0 compared to the other things. And the reason we're doing that is we can't set it to exactly 0. Otherwise, we won't be able to sense it with our Rogowski. And we will have completely failed in this task. But we're going to say that it's small enough that our Rogowski, which is super sensitive, can get it. But it's not big enough to actually cause a change to our overall picture here. So we're going to be able to get rid of this term entirely. So this means that the power balance for our plasma is just going to be external heating is equal to the external-- the externally injected power is going to go purely into ohmic heating our plasma. And so that means that the power coming in is going to be equal to the integral over the volume of the current density squared over the local conductivity of our plasma, sigma, integrated over the volume here. This is just a rearrangement where we say that J is equal to sigma E, like that. So we replace J dot E with J squared over sigma. And you will remember that, of course, in general, sigma is a tensor, especially in plasmas. It can be a tensor, which is very much not a multiple of the identity matrix. But here, we're just going to treat it as a scalar. And there can be some subtleties involved in this technique if you think about the tensor nature of this instead. OK. So now, we've got this squared here. And again, J is just going to be equal to I phi over pi A squared. I probably forgot to mention somewhere that our tokamak has a minor radius of A. But we use this a lot.
So this is-- actually, sorry, I'm being slightly vague here. So this is like the average J here that we're going to work with. And there's going to be some subtlety between going through J squared averaged over the volume to just J average of the volume squared. I'm going to sweep that under the rug. We're just going to do the second one, yeah. And so that means we end up with an-- let's see here. We also have a volume of our system, which is pi A squared dotted with 2 pi capital R. That's the volume of a torus with minor radius A and major radius R. OK, good. So that means we end up by being able to write our volume averaged conductivity as equal to I phi squared over the power times by-- this is going to be 2 pi R over pi A squared here. And this I phi squared over the power is just going to be equal to I phi over V phi, like this, still times by 2 R over A squared. So we can now measure the conductivity of our plasma because we have measured this using our Rogowski. We've measured this using our flux loop. And I sincerely hope we know the size of our tokamak. Otherwise, we've kind of lost the plot. So this is a pretty neat result. And the reason it's neat is because of what theory tells us about this scalar conductivity. So theory tells us that the conductivity sigma is something like 2 times 10 to the 4 times Te to the 3/2 over z-- which is, we'll talk about it in a moment, a bit like the atomic charge, but not quite, because I'm going to give it a little z sigma to show it's not quite the z you've been expecting-- and then divided by the Coulomb logarithm, that way up. And this is in units of 1 over ohm meters. And I think, to get this coefficient out front, Te has to be in units of electron volts. So if you look inside the NRL Plasma Formulary, you'll get this nice equation. So this is to do with the Spitzer resistivity, which you saw in part 1. And so there's some kind of nice things in here. We've now measured sigma.
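Putting the two loop measurements together: a sketch of the conductivity and temperature estimate, using the 2e4 Te^(3/2) / (Z lnLambda) form quoted above. The machine numbers here are invented for illustration, and Z_sigma = 1 (clean plasma) and lnLambda = 10 are the assumptions discussed in the lecture.

```python
# Sketch: volume-averaged conductivity from the Rogowski current and
# the loop voltage, then a Spitzer-like temperature estimate. The
# machine numbers are invented; Z_sigma = 1 (clean plasma) and
# lnLambda = 10 are assumptions, as discussed in the lecture.
I_phi = 1.0e6     # plasma current from the Rogowski [A]
V_phi = 2.0       # loop voltage from the flux loop [V]
R, a = 1.7, 0.5   # major and minor radius of the tokamak [m]

sigma = (I_phi / V_phi) * 2.0 * R / a**2  # [1/(Ohm m)]

# Invert sigma ~ 2e4 * Te^{3/2} / (Z_sigma * lnLambda), Te in eV:
Z_sigma, lnLambda = 1.0, 10.0
Te = (sigma * Z_sigma * lnLambda / 2.0e4) ** (2.0 / 3.0)
print(f"sigma = {sigma:.2e} /(Ohm m), Te ~ {Te:.0f} eV")
```

Note the 2/3 power when inverting the Te^(3/2) scaling, and that both loop signals need careful calibration before this becomes quantitative.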
The Coulomb logarithm is a very slowly changing function of the plasma parameters because it's in a logarithm. And for something like a tokamak, it's about 10. So you can use 10. You don't really care that much. This is interesting. So z, or z sigma, is always going to be greater than the true z of your plasma. It's a little bit like z effective, which you may have come across in-- yeah, we did it in the fusion class. It's great. It's a little bit like z effective, in that it also takes into account the fact that you have some impurities inside your plasma. And they increase the number of electrons available. But it's also not z effective. And if you want to understand what it is, you have to go do some detailed calculations where you take into account what those extra electrons from impurities are doing. However, if you have a perfectly pure plasma, like a pure hydrogen, or deuterium, or tritium, then you do end up with z sigma equals z equals 1. So let's say you've done a really good job and you have got the very nice and pure plasma. You could probably set that equal to 1 or at least 1-ish. Let's not worry about it too much. So that means, by measuring sigma-- by measuring sigma from your Rogowski and your flux loop and the geometry of your tokamak, you can get out the plasma temperature, which is not a bad measurement for two loops of wire. The volume averaged plasma temperature, admittedly. Which temperature is it going to be dominated by? Is it going to be dominated by the cold plasma at the edge or the hot plasma at the core? STUDENT: [INAUDIBLE]? JACK HARE: Maybe. Any advances on that? STUDENT: [INAUDIBLE]. JACK HARE: Yeah, so the hot plasma is much more conductive. We aren't including inductance here. So inductance would force the current to the outside because it's going to want to reside on the skin of the conductor, if we've got a non-steady state system. But in steady state, the inductance doesn't matter.
The current has time to soak through the entire conductor. So this would be a pretty good measure of where most of the current is flowing, which is in the core of the plasma. So that's actually pretty cool because this is a really hard thing to measure. I mean, the temperature is something that people invest a huge amount of money in doing. Now, this isn't a great measurement. These loops are not exactly the most accurate things. And of course, you've got a T to the 3/2, right? And T to the 3/2 means that any small error in sigma is going to be amplified a bit when we measure the temperature. But it is still remarkably easy to do on an MCF plasma. So this, again, applies for any plasma. Let's see. It needs to be steady state. We've derived it with toroidal symmetry. But I don't see any reason why you couldn't try and redo this for like a z-pinch plasma. I think we probably have made some assumptions. I don't think we've made that many assumptions in terms of the axisymmetry of the plasma because the current has to be constant around it. So even if it's a stellarator, you still need one measurement with a Rogowski to get the current through the plasma. So yeah, questions on this? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so always when we say things like steady state and things like that, what we should say-- it's like saying large or something. We should come up with a dimensionless parameter which characterizes that. So what dimensionless parameter should we have to enable us to use the steady state assumption, which was dropping this term and actually also this term as well? We're talking about steady state. We're talking about times. So there should be some sort of timescale that we're measuring. So there should be like an inductive timescale here where we could form a timescale out of this. STUDENT: [INAUDIBLE]. JACK HARE: Yeah, rate of change of current or the current over the rate of change of the current will give you some timescale.
So that would be a good timescale. But that needs to be much, much less-- well, not much less than 1, because that has units of time. It needs to be compared to another timescale. So the current must change on a timescale-- I got it the wrong way around. The current must change on a timescale which is long compared to the inductive timescale of the plasma. So it changes slowly, right? So if the current changes sufficiently slowly, you'll be able to use these equations. And if the fractional change in current is, say, on the order of 10 to the minus 2, then you'll have errors on the order of 10 to the minus 2 or something in here, because I think this is mostly linear. And that will go slightly wrong when you try and work out the temperature from sigma because it's been linear up to this point. And then we'll have some slight non-linearity. But you get the idea. So with all of these things, whenever we make these assumptions, these assumptions can be justified. There'll be some dimensionless parameter that allows us to justify that assumption. Yeah, other questions? Anything from Columbia? STUDENT: [INAUDIBLE]? JACK HARE: Well, we're averaging-- really, what we're averaging is this quantity here, J dot E. So E is going to be practically uniform throughout the entire thing because we're driving this flux of the transformer. Or we're driving this potential as a transformer. That's not going to change very much. I'm not worried about that. J, I agree, is peaked in general. For your typical tokamak plasma, it's going to be peaked. So I think it's possible that we are biased towards the hotter temperatures because of that. And that's probably where it comes in, as I was saying, that they are the ones carrying the current. And so we're biased towards measuring their temperature. So yeah, it's also interesting-- like, what does it mean to measure an average temperature?
Because if you have a small bit of plasma that's very hot but very low density, it has very little thermal energy density. And if you have a large amount of plasma that's cold, if you mix those together, like mixing hot water with cold water, you don't just average the temperature. You don't say, this is boiling, this is freezing, the average is 50. You would weight it by the number of particles you have. And so yeah, I think all of this is kind of slightly hand-wavy. And again, I think Hutchinson goes into a bit more detail and says that, given the accuracy that you can make these measurements with, some of the subtleties of the theory are not that important because your noise is larger than the inadequacy of your models. But it's a good question. If you really wanted to do this to measure temperature on, like, ITER, you might want to spend a little bit more time thinking about it. Yeah, cool. STUDENT: [INAUDIBLE]? JACK HARE: Yes, absolutely. Yeah, so this is really just a steady state section, yeah. Cool. Yeah. OK. We've still got a little time. So we're going to keep going. Let me clear some space. So another use of this in MCF-- here, we have a system in which we create a well inside our magnetic pressure, which our plasma sits inside. So if I have a diagram like this, where this is the minor radius of our machine-- actually, just some spatial coordinates. I don't think I've drawn this correctly here. But we have some magnetic field pressure profile, like this. And we've got a little dip inside it. And that little dip is designed such that the thermal pressure of our plasma, p equals nT, sits inside it. And so the total pressure is constant here. So this means that the system is in pressure balance. It's in equilibrium. OK. The neat thing about that is, because the magnetic field is the thing that's confining the pressure, by measuring the magnetic field in an MCF device, we can indirectly measure the pressure.
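The pressure-balance idea just described can be sketched directly. This is a simplified version that neglects the tension term entirely (exact only for a straight, purely axial field), so the plasma pressure is just the depth of the magnetic well in energy-density units; the field values are made up.

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, H/m

def plasma_pressure(b_inside, b_edge):
    """Plasma thermal pressure from the depth of the magnetic well.

    Assumes simple pressure balance p + B^2/(2 mu0) = const with the
    tension term neglected, so p is the difference in magnetic energy
    density between the plasma edge and the plasma interior.
    """
    return (b_edge ** 2 - b_inside ** 2) / (2.0 * MU_0)

# A 5 T vacuum field that dips to 4.9 T inside the plasma:
p = plasma_pressure(4.9, 5.0)  # pascals; a few hundred kPa here
```

A 2% dip in a 5 T field already corresponds to hundreds of kilopascals, which is why small, careful magnetic measurements can resolve the plasma pressure at all.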
And you'll remember that a pressure is a pretty key quantity for our fusion because we want to have something that goes like-- what's that nice propaganda formula? We've got like p squared-- anyway, whatever. I can't remember how they write it on the board in NW-17. But the fusion power is going to go as the pressure squared. So we'd like to know what the pressure is because we want to know what our fusion power is. And we can do that by making very careful measurements of the magnetic field. And we're going to talk about that in detail now. But we're going to get a little bit lost in the weeds. I'm not going to have a chance to finish it this lecture. So I just want you to focus on the fact that what we're trying to do is measure P, the pressure, from the magnetic field. So again, we're trying to get at some interesting quantities inside the plasma just using a few B-dot probes, which, again, is kind of cool. OK. So we'll start by looking at this in terms of a force balance. So our steady state MHD equation is J cross B minus the gradient of pressure equals 0, like that. If we insert Maxwell's equation-- one of Maxwell's equations, ignoring the displacement current-- so curl of B equals mu 0 J-- into this equation, we end up with something that we can eventually rewrite in the form minus the gradient of pressure plus B squared over 2 mu 0 plus 1 over mu 0 B dot the gradient of B equals 0. So this balance equation here-- again, it's equivalent to this one. We've just eliminated J-- has two sets of terms in it. These terms look like pressure gradients. The thermal pressure is definitely a pressure gradient. But the magnetic energy density also looks like a sort of magnetic pressure gradient. So these two terms look similar. And this term here is the tension force. Or you might want to call it the curvature force because you can see from B dot grad B, there's going to be some term associated with the curvature of the magnetic field. 
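The manipulation described here, written out (a standard vector identity; displacement current neglected):

```latex
\mathbf{J}\times\mathbf{B}-\nabla p=0,
\qquad
\nabla\times\mathbf{B}=\mu_0\mathbf{J}
% Substituting and using
% (\nabla\times\mathbf{B})\times\mathbf{B}
%   = (\mathbf{B}\cdot\nabla)\mathbf{B} - \nabla\!\left(B^2/2\right):
\;\;\Longrightarrow\;\;
-\nabla\!\left(p+\frac{B^2}{2\mu_0}\right)
+\frac{1}{\mu_0}\,(\mathbf{B}\cdot\nabla)\mathbf{B}=0
```

The bracketed term is the total (thermal plus magnetic) pressure, and the last term is the tension, or curvature, force.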
Now, this is a very hard equation to solve in general. So this is fully three dimensional. If you're trying to solve this in a fully three dimensional object, like a stellarator, or even inside a tokamak where you haven't made any assumptions, this is very, very, very hard. So what we're going to do is solve this instead in a cylinder. So we're going to take our tokamak. We're going to cut it. And we're going to unfold it. And we're going to turn it into a long, thin cylinder, like this. You can equivalently attempt to do this by making the aspect ratio of this very, very large. But rather than that, we're just going to do it this way around here. So in our standard tokamak toroidal geometry, we have the phi coordinate and the theta coordinate, like that, the toroidal and poloidal coordinates. And now, we're going to have the phi coordinate in this direction. So I may occasionally refer to it as z instead because it's now cylindrical coordinates. And our theta coordinate is still going to wrap around our plasma, like that. So we're going to look at solutions to this equation in this geometry because it's much simpler. We'll be trying to find ways where we can use our B-dot probes to measure pressure in this geometry. It is possible to use all of these techniques in a full toroidal geometry. You just have to go back and solve this equation instead-- much more complicated. Any questions on this before we keep going? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so you would have to solve this to get the [INAUDIBLE]. Or I think in [INAUDIBLE] you still assume axisymmetry. So you can still have a reduced form of this. But it's still very complicated. Yeah, exactly. So again, just for demonstrating how this works intuitively, we're going to use this very simple geometry. But these techniques still apply for real tokamak geometries as well. So don't think that this is only relevant to z-pinches. This is very much relevant to other stuff.
I'm just going to make my z-pinch a little bit shorter so that I can fit in an equation underneath here because one of the things we're going to be interested in measuring is the magnetic field B theta here. And this magnetic field will take some arbitrary values. But as with any other function, we can decompose this. We can do a Fourier decomposition. And so we're going to do a Fourier decomposition on this magnetic field. And then we're going to talk about different terms in that Fourier decomposition as if they were separate things. They're not separate. They're just our way of representing this magnetic field. But it does allow us intuitively to get at some of the physics term by term. So if we write this down as a Fourier decomposition, we'll have a DC term, C0 over 2. The 2 here is just our convention when we're doing Fourier transforms. We've seen this before. And then we're going to have a sum over an index M from 1 to infinity. And we're going to have a term CM cosine of M theta plus SM sine of M theta, like this. So these are different coefficients, which we don't know. But of course, we can find them by doing the Fourier transform of this, or the Fourier decomposition of this. And we do that by doing things like-- I'm sure you've seen this before, but just to remind you-- CM is going to be 1 over pi times the integral from 0 to 2 pi of B theta cosine of M theta d theta, and similarly for the sine component. OK. So we have our signal. We can decompose it into these modes. And then we can give these modes names, right? And so we're going to particularly consider the modes M equals 0, which is this term, and the M equals 1 mode here. These are the lowest order modes in our system. There are modes all the way up to infinity. But these are the ones which are usually the most serious in terms of plasma instabilities. And they're also the ones that provide the most information.
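In practice B theta is sampled by a finite set of pickup coils around the circumference, so the Fourier integrals become sums. A minimal sketch, with the probe count and synthetic signal made up:

```python
import math

def fourier_coefficients(b_theta, m_max):
    """Cosine/sine coefficients C_m, S_m of a signal sampled at N
    equally spaced angles theta_k = 2*pi*k/N around the plasma.

    The continuous C_m = (1/pi) * integral of B cos(m theta) d theta
    becomes C_m = (2/N) * sum_k B_k cos(m theta_k), and similarly S_m.
    """
    n = len(b_theta)
    thetas = [2.0 * math.pi * k / n for k in range(n)]
    C = [2.0 / n * sum(b * math.cos(m * t) for b, t in zip(b_theta, thetas))
         for m in range(m_max + 1)]
    S = [2.0 / n * sum(b * math.sin(m * t) for b, t in zip(b_theta, thetas))
         for m in range(m_max + 1)]
    return C, S

# Synthetic probe data: B = 1.0 + 0.3 cos(theta) + 0.1 sin(2 theta)
n_probes = 16
signal = [1.0 + 0.3 * math.cos(2 * math.pi * k / n_probes)
          + 0.1 * math.sin(2 * (2 * math.pi * k / n_probes))
          for k in range(n_probes)]
C, S = fourier_coefficients(signal, m_max=2)
# Recovers C[0]/2 = 1.0 (DC term), C[1] = 0.3, S[2] = 0.1
```

With N equally spaced probes you can only resolve modes up to about N/2 before aliasing sets in, which is one reason real arrays carry more coils than the two or three modes you actually analyze.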
So once we've measured our poloidal magnetic field, we've done this decomposition, we can learn an awful lot by studying first the M equals 0 component of the magnetic field, and then the M equals 1 component. If you want to, you can keep extending this analysis all the way up to infinity. But generally, people tend to get bored before they get there. So, cool. In the homework, which has now been uploaded, you are asked to sketch these modes. Does anyone want to tell me what shapes these two first modes give us? Yeah? STUDENT: [INAUDIBLE] JACK HARE: OK, M equals 0 is a nice circle, like that. So this is a symmetric magnetic field. It doesn't depend on the azimuthal, or poloidal, angle here, yeah. What about the M equals 1? Hm? STUDENT: [INAUDIBLE]. JACK HARE: Oval. OK. Is it an oval like this? Or an oval like this? STUDENT: [INAUDIBLE]. JACK HARE: Sorry? STUDENT: [INAUDIBLE]. JACK HARE: Right, OK. It's here. So which oval is it? STUDENT: [INAUDIBLE]. JACK HARE: It's a trick question. Come on. STUDENT: [INAUDIBLE]. JACK HARE: It's neither. STUDENT: [INAUDIBLE]. JACK HARE: Yeah, exactly. So that's the M equals 2. Cool. OK, so you're telling me the M equals 1 mode is shifted. So if I had my initial position here, it's now up like this. Or is it like this? Or is it like this? Which of these is it? STUDENT: [INAUDIBLE]. JACK HARE: Yes. OK, so it depends on CM and SM. These are effectively telling you which direction it's going to be shifted in. For example, CM is probably in this direction. And SM is in this direction. So the relative contributions of these two are going to tell you how much your plasma has shifted. So straight away, that is a useful measurement. This measurement straight away is going to tell you if your plasma is moving. Perhaps, if your plasma is moving, it's going to hit the wall. And you can do some active feedback and stop it from doing that.
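Reading the shift direction off the m = 1 coefficients can be sketched as below. This treats (C1, S1) as the two components of a displacement-like vector, which is my simplification: the direction follows from atan2, but calibrating the actual displacement magnitude needs the full equilibrium theory, so no attempt is made here.

```python
import math

def shift_direction(c1, s1):
    """Poloidal angle (radians) toward which the m = 1 perturbation
    points, treating (C1, S1) as cosine/sine components of a
    displacement-like vector. Magnitude is deliberately not calibrated.
    """
    return math.atan2(s1, c1)

angle = shift_direction(c1=0.0, s1=0.2)  # pure sine component: pi/2
```

In a feedback loop this angle is what tells the control coils which way to push back before the plasma reaches the wall.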
But it turns out we'll learn even more by looking at the M equals 1 mode, even for plasmas which are relatively stable. OK. We are coming up on the end of this session. So I don't want to start on the next section, which is going to be using M equals 0 to measure diamagnetism. But I am happy to take any questions for the next few minutes before we pause today. Any questions? Yes? STUDENT: [INAUDIBLE]? JACK HARE: We're totally neglecting-- are we keeping the tension term in there if we're [INAUDIBLE] tension? We neglect the toroidal tension. The only thing that remains is the poloidal tension. These magnetic field lines themselves have curvature, right? So we keep the poloidal tension. We get rid of the toroidal tension. So this is the same-- this is why I said, if you make this very, very large aspect ratio, so the curvature is very small, you also get a similar kind of picture. So that is something that is missing from this model that may or may not be important in a real system. OK, yeah, it's a good question.